PG-Strom is an extension of PostgreSQL that utilizes GPUs and NVMe SSDs to enable terabyte-scale data processing and in-database analytics. It features SSD-to-GPU Direct SQL, which loads data directly from NVMe SSDs to GPUs using GPUDirect RDMA (peer-to-peer DMA), bypassing CPU and RAM and reducing I/O traffic over the PCIe bus. PG-Strom also uses the Apache Arrow columnar format to further boost performance by transferring only the referenced columns and enabling vector processing on GPUs. Benchmark results show PG-Strom processing over a billion rows per second on a single-node server equipped with four NVIDIA GPUs and sixteen NVMe SSDs.
20201006_PGconf_Online_Large_Data_Processing
1. Data processing more than billion rows per second
~ unleash full capability of GPU and NVME ~
HeteroDB, Inc.
Chief Architect & CEO
KaiGai Kohei <kaigai@heterodb.com>
3. What is PG-Strom?
• Transparent GPU acceleration for analytics and reporting workloads
• Binary code generation by JIT from SQL statements
• PCIe-bus level optimization by SSD-to-GPU Direct SQL technology
• Columnar storage for efficient I/O and vector processors
PG-Strom is an extension of PostgreSQL for terabyte-scale data processing and in-database analytics by utilizing GPU and NVME-SSD.
[Diagram: applications off-load SQL workloads to the GPU, for IoT/Big-Data and for ML/Analytics]
• SSD-to-GPU Direct SQL
• Columnar Store (Arrow_Fdw)
• Asymmetric Partition-wise JOIN/GROUP BY
• BRIN-Index Support
• NVME-over-Fabric Support
• PostGIS + GiST Support (WIP)
• GPU Memory Store (WIP)
• Procedural Language for GPU native code
• IPC on GPU device memory over cuPy data frame
4. PG-Strom as an open source project
Official documentation in English / Japanese
http://heterodb.github.io/pg-strom/
Many stars ☺
Distributed under GPL-v2.0
Has been developed since 2012
■ Supported PostgreSQL versions
• PostgreSQL v12, v11, and v10; WIP for v13 support
■ Related contributions to PostgreSQL
• Writable FDW support (9.3)
• Custom-Scan interface (9.5)
• FDW and Custom-Scan JOIN pushdown (9.5)
■ Talks at PostgreSQL community
• PGconf.EU 2012 (Prague), 2015 (Vienna), 2018 (Lisbon)
• PGconf.ASIA 2018 (Tokyo), 2019 (Bali, Indonesia)
• PGconf.SV 2016 (San Francisco)
• PGconf.China 2015 (Beijing)
6. Target: IoT/M2M Log-data processing
Manufacturing / Logistics / Mobile / Home electronics
• All kinds of devices generate various forms of log-data.
• People want to find insights in that data.
• Data importing and processing must be rapid.
• System administration should be simple and easy.
[Diagram: devices write log-data to a JBoF (Just a Bunch of Flash) over NVME-over-Fabric (RDMA); PG-Strom exposes the shared data-frame to BI tools and machine-learning applications (e.g., anomaly detection), under the control of the DB admin]
7. How much data do we expect? (1/3)
■ This is a factory of an international top-brand company.
■ Most users don't have bigger data than this.
Case: a semiconductor factory managed up to 100TB with PostgreSQL
[Diagram: about 20TB per year, kept for 5 years, for defect investigation]
8. How much data do we expect? (2/3)
Nowadays, a 2U server is capable of 100TB-class capacity.
Model: Supermicro 2029U-TN24R4T
CPU: Intel Xeon Gold 6226 (12C, 2.7GHz) x2
RAM: 32GB RDIMM (DDR4-2933, ECC) x12
GPU: NVIDIA Tesla P40 (3840C, 24GB) x2
HDD: Seagate 1.0TB SATA (7.2krpm) x1
NVME: Intel DC P4510 (8.0TB, U.2) x24
N/W: built-in 10GBase-T x4
NVME capacity: 8.0TB x 24 = 192TB
How much is it? 60,932 USD, quoted by thinkmate.com (31-Aug-2019)
9. How much data do we expect? (3/3)
On the other hand, it is not a small amount of data either.
10. Weapons to tackle terabytes scale data
• SSD-to-GPU Direct SQL: Efficient Storage
• Arrow_Fdw: Efficient Data Structure
• Table Partitioning: Efficient Data Deployment
11. PG-Strom: SSD-to-GPU Direct SQL
Efficient Storage
12. GPU's characteristics - mostly as a computing accelerator
Over 10 years of history in HPC, then massive popularization in Machine-Learning
[Diagram: NVIDIA Tesla V100, with application areas: supercomputing (TITEC TSUBAME3.0), computer graphics, machine-learning and simulation]
How can I/O workloads be accelerated by the GPU, which is a computing accelerator?
13. A usual composition of an x86_64 server
[Diagram: typical server components: CPU, RAM, GPU, SSD, HDD, N/W]
14. Data flow to process a massive amount of data
[Diagram: normal data flow: PostgreSQL data blocks move from the SSD through CPU/RAM and on to the GPU over the PCIe bus]
Normal Data Flow
All the records, including junk, must be loaded onto RAM once, because software cannot check the necessity of the rows prior to data loading. So the amount of I/O traffic over the PCIe bus tends to be large.
Unless records are once loaded onto CPU/RAM over the PCIe bus, software cannot check their necessity, even if they are "junk" records.
15. SSD-to-GPU Direct SQL (1/4) â Overview
[Diagram: SSD-to-GPU Direct SQL: PostgreSQL data blocks move from the NVMe SSD directly to the GPU over the PCIe bus, bypassing CPU/RAM]
NVIDIA GPUDirect RDMA
It allows data blocks on the NVME-SSD to be loaded into the GPU using peer-to-peer DMA over the PCIe bus, bypassing CPU/RAM. The GPU runs the SQL (WHERE-clause, JOIN, GROUP BY) to reduce the data size, so only a small amount of data is returned to the host.
16. Background - GPUDirect RDMA by NVIDIA
[Diagram: physical address space of the host, containing RAM and the PCIe BAR1 area into which GPU device memory is mapped, alongside PCIe devices such as NVMe-SSD and Infiniband HBA]
GPUDirect RDMA
It enables mapping GPU device memory into the physical address space of the host system. Once a physical address of GPU device memory is available, we can use it as the source or destination address of DMA with PCIe devices.
Example: a DMA request with SRC = 1200th sector, LEN = 40 sectors, DST = 0xe0200000, where the destination lies inside the GPU's BAR1 window (0xe0000000-0xf0000000 in the diagram).
17. SSD-to-GPU Direct SQL (2/4) â System configuration and workloads
Supermicro SYS-1019GP-TT
CPU: Xeon Gold 6126T (2.6GHz, 12C) x1
RAM: 192GB (32GB DDR4-2666 x6)
GPU: NVIDIA Tesla V100 (5120C, 16GB) x1
SSD: Intel SSD DC P4600 (HHHL; 2.0TB) x3 (striping configuration by md-raid0)
HDD: 2.0TB (SATA; 7.2krpm) x6
Network: 10Gb Ethernet, 2 ports
OS: Red Hat Enterprise Linux 7.6, CUDA 10.1 + NVIDIA Driver 418.40.04
DB: PostgreSQL v11.2, PG-Strom v2.2devel
■ Query Example (Q2_3)
SELECT sum(lo_revenue), d_year, p_brand1
  FROM lineorder, date1, part, supplier
 WHERE lo_orderdate = d_datekey
   AND lo_partkey = p_partkey
   AND lo_suppkey = s_suppkey
   AND p_category = 'MFGR#12'
   AND s_region = 'AMERICA'
 GROUP BY d_year, p_brand1
 ORDER BY d_year, p_brand1;
Table sizes: lineorder 2.4B rows (351GB); customer 12M rows (1.6GB); date1 2.5K rows (400KB); part 1.8M rows (206MB); supplier 4.0M rows (528MB).
Summarizing queries over a typical star-schema structure on a simple 1U server.
18. SSD-to-GPU Direct SQL (3/4) â Benchmark results
• Query Execution Throughput = (353GB; DB-size) / (Query response time [sec])
• SSD-to-GPU Direct SQL runs the workloads close to the hardware limitation (8.5GB/s)
• About 3x faster than the filesystem- and CPU-based mechanism
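As a worked example of the formula above: Q1_1 with SSD-to-GPU Direct SQL runs at about 7,880 MB/s, which corresponds to a query response time of roughly 353 GB / 7.88 GB/s, i.e. about 45 seconds (derived purely from the figures in the chart below).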
[Chart: Star Schema Benchmark on 1x NVIDIA Tesla V100 + 3x Intel DC P4600; query execution throughput in MB/s for Q1_1 .. Q4_3. PostgreSQL 11.2 stays around 2,200-2,350 MB/s for all queries, while PG-Strom 2.2devel reaches about 7,250-7,925 MB/s.]
19. SSD-to-GPU Direct SQL (4/4) â Software Stack
[Diagram: software stack of SSD-to-GPU Direct SQL]
• Database software layer: PostgreSQL with the pg_strom extension, which issues read(2) and ioctl(2).
• Operating system software layer: the ext4 filesystem plus the nvme_strom kernel module (software by HeteroDB), which translates logical file locations to physical locations on the drive and issues a special NVME READ command that loads the source blocks on the NVME-SSD into the GPU's device memory (GPUDirect RDMA).
• Local storage layer: NVMe requests go through blk-mq and nvme-pcie to the local NVMe SSD.
• External storage server: nvme-rdma over a network HBA (RDMA over Converged Ethernet) to an NVME-over-Fabric target with its own network HBA and NVMe SSD.
20. PG-Strom: Arrow_Fdw (columnar store)
Efficient Data Structure
21. Background: Apache Arrow (1/2)
■ Characteristics
• Column-oriented data format designed for analytics workloads
• A common data format for inter-application exchange
• Various primitive data types like integer, floating-point, date/time and so on
[Diagram: Apache Arrow as a common data format exchanged between applications, including PostgreSQL / PG-Strom and NVIDIA GPU software]
22. Background: Apache Arrow (2/2)
Apache Arrow Data Types | PostgreSQL Data Types  | Extra description
Int                     | int2, int4, int8       |
FloatingPoint           | float2, float4, float8 | float2 is an enhancement by PG-Strom
Binary                  | bytea                  |
Utf8                    | text                   |
Bool                    | bool                   |
Decimal                 | numeric                | Decimal = fixed-point 128bit value
Date                    | date                   | adjusted to unitsz = Day
Time                    | time                   | adjusted to unitsz = MicroSecond
Timestamp               | timestamp              | adjusted to unitsz = MicroSecond
Interval                | interval               |
List                    | array types            | Only 1-dimensional arrays are supported
Struct                  | composite types        |
Union                   | ------                 |
FixedSizeBinary         | char(n)                |
FixedSizeList           | ------                 |
Map                     | ------                 |
Most data types are convertible between Apache Arrow and PostgreSQL.
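As an illustration of this mapping, a minimal Arrow_Fdw foreign-table definition might look like the sketch below; the arrow_fdw server name and the file option follow the PG-Strom documentation, while the table name, column names and file path are hypothetical.

CREATE FOREIGN TABLE flogdata (
    ts        timestamp,   -- Arrow Timestamp (unitsz = MicroSecond)
    dev_id    int,          -- Arrow Int
    signal    float8,       -- Arrow FloatingPoint
    label     text,         -- Arrow Utf8
    is_valid  bool          -- Arrow Bool
) SERVER arrow_fdw
  OPTIONS (file '/opt/nvme/logdata.arrow');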
23. Log-data characteristics (1/2) - WHERE is it generated?
[Diagram: Traditional OLTP & OLAP: data is created inside the database system and flows through ETL from OLTP to OLAP, where BI tools read it. IoT/M2M use case: data is created outside the database system by many devices, passes a gateway server and log processing, and must be imported into the database before BI tools can use it.]
Data importing becomes a heavy, time-consuming operation for big-data processing.
24. Arrow_Fdw - maps Apache Arrow files as a foreign table
[Diagram: Apache Arrow files sit next to ordinary PostgreSQL tables; an Arrow_Fdw foreign table maps the files, and pg2arrow writes query results out to Arrow files instead of importing them]
• Arrow_Fdw enables mapping Apache Arrow files as a PostgreSQL foreign table.
• No importing, so Apache Arrow files are immediately visible.
• pg2arrow generates an Apache Arrow file from the query results of PostgreSQL.
• Arrow_Fdw can be writable, but only batched INSERT is supported.
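A minimal usage sketch of the flow above, assuming the IMPORT FOREIGN SCHEMA support and the file option described in the PG-Strom documentation (table names and file paths are hypothetical):

-- Map an existing Arrow file; the column list is taken from the file's schema metadata
IMPORT FOREIGN SCHEMA flineorder
  FROM SERVER arrow_fdw INTO public
  OPTIONS (file '/opt/nvme/lineorder.arrow');

-- Writable Arrow_Fdw: each batched INSERT is appended as one new RecordBatch
INSERT INTO flineorder
  SELECT * FROM lineorder WHERE lo_orderdate >= 19970101;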
27. SSD-to-GPU Direct SQL on Arrow_Fdw (1/3)
■ Why is Apache Arrow beneficial?
• Less I/O to be loaded: only the referenced columns
• Higher utilization of GPU cores, thanks to vector processing and the wide memory bus
• Read-only structure: no MVCC checks are required at run-time
It transfers ONLY the referenced columns over the SSD-to-GPU Direct SQL mechanism.
[Diagram: P2P DMA over the PCIe bus transfers only the referenced columns (plus metadata) from the NVMe SSD to the GPU; the GPU code supports the Apache Arrow format as a data source, runs the SQL workloads (WHERE-clause, JOIN, GROUP BY) in parallel on thousands of cores, and writes back the small results built as PostgreSQL heap-tuples.]
28. SSD-to-GPU Direct SQL on Arrow_Fdw (2/3) - Benchmark Results
• Query Execution Throughput = (6.0B rows) / (Query Response Time [sec])
• Combined usage of SSD-to-GPU Direct SQL and the columnar store pulled out extreme performance: 130-500M rows per second
• Server configuration is identical to the one shown on p.17 (1U, 1 CPU, 1 GPU, 3 SSDs)
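As a worked example of the formula: Q1_1 with Arrow_Fdw reaches about 481 million rows/s, which corresponds to roughly 6.0B rows / 481M rows/s, i.e. about 12.5 seconds of query response time.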
[Chart: Star Schema Benchmark (s=1000, DB size 879GB) on 1x Tesla V100 + 3x DC P4600; million rows processed per second for Q1_1 .. Q4_3. PostgreSQL 11.5: about 15-16; PG-Strom 2.2 (row data): about 53-55; PG-Strom 2.2 + Arrow_Fdw: about 138-500.]
29. SSD-to-GPU Direct SQL on Arrow_Fdw (3/3) - Benchmark Validation
Foreign table "public.flineorder"
Column | Type | Size
--------------------+---------------+--------------------------------
lo_orderkey | numeric | 89.42GB
lo_linenumber | integer | 22.37GB
lo_custkey | numeric | 89.42GB
lo_partkey | integer | 22.37GB <-- Referenced by Q2_1
lo_suppkey | numeric | 89.42GB <-- Referenced by Q2_1
lo_orderdate | integer | 22.37GB <-- Referenced by Q2_1
lo_orderpriority | character(15) | 83.82GB
lo_shippriority | character(1) | 5.7GB
lo_quantity | integer | 22.37GB
lo_extendedprice | integer | 22.37GB
lo_ordertotalprice | integer | 22.37GB
lo_discount | integer | 22.37GB
lo_revenue | integer | 22.37GB <-- Referenced by Q2_1
lo_supplycost | integer | 22.37GB
lo_tax | integer | 22.37GB
lo_commit_date | character(8) | 44.71GB
lo_shipmode | character(10) | 55.88GB
FDW options: (file '/opt/nvme/lineorder_s401.arrow') ... file size = 310GB
■ Q2_1 actually reads only 157GB out of 681GB (23.0%) from the NVME-SSD.
■ Execution time of Q2_1 is 24.3s, so 157GB / 24.3s = 6.46GB/s
→ Reasonable performance for 3x Intel DC P4600 on a single-CPU configuration.
Almost equivalent physical data transfer, but no need to load the unreferenced columns.
31. Log-data characteristics (2/2) - INSERT-only
■ MVCC visibility checks are (relatively) not significant.
■ Rows with old timestamps will never be inserted.
[Diagram: transactional data receives INSERT / UPDATE / DELETE anywhere in the table, while log-data with timestamps receives INSERTs only at the current end; data from 2018 and 2019 is never touched again]
32. Configuration of PostgreSQL partition for log-data (1/2)
■ Mixture of PostgreSQL tables and Arrow foreign tables in one partition declaration
■ Log-data should carry a timestamp and is never updated
→ Old data can be moved to an Arrow foreign table for more efficient I/O
[Diagram: the logdata table is partitioned by timestamp into logdata_201912, logdata_202001, logdata_202002 and logdata_current; incoming log-data with a timestamp (e.g. 2020-03-21 12:34:56, dev_id: 2345, signal: 12.4) lands in logdata_current]
PostgreSQL tables: row data store, read-writable but slow.
Arrow foreign tables: column data store, read-only but fast.
Unrelated child tables are skipped if the query contains a WHERE-clause on the timestamp,
e.g.) WHERE timestamp > '2020-01-23' AND device_id = 456
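A minimal sketch of such a partition declaration (table, column and file names are hypothetical; the arrow_fdw server and the file option follow the PG-Strom documentation):

CREATE TABLE logdata (
    ts        timestamp NOT NULL,
    device_id int,
    signal    float8
) PARTITION BY RANGE (ts);

-- frozen month: columnar Arrow file, read-only but fast to scan
CREATE FOREIGN TABLE logdata_202001
  PARTITION OF logdata
  FOR VALUES FROM ('2020-01-01') TO ('2020-02-01')
  SERVER arrow_fdw
  OPTIONS (file '/opt/nvme/logdata_202001.arrow');

-- current data: ordinary row store, read-writable
CREATE TABLE logdata_current
  PARTITION OF logdata DEFAULT;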
33. Configuration of PostgreSQL partition for log-data (2/2)
[Diagram: a month later, the data of 2020-03 is extracted from logdata_current into a new Arrow partition logdata_202003, which joins logdata_201912, logdata_202001 and logdata_202002 under the logdata table (PARTITION BY timestamp)]
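A possible monthly rotation under the same assumptions as the declaration sketched above (whether the Arrow file is materialized through a batched INSERT as below, or dumped with pg2arrow and attached afterwards, is an implementation choice):

BEGIN;
-- take the writable row store out of the partition tree while rotating
ALTER TABLE logdata DETACH PARTITION logdata_current;

-- freeze March 2020 into an Arrow foreign table
-- (assumes writable Arrow_Fdw; one batched INSERT becomes one RecordBatch)
CREATE FOREIGN TABLE logdata_202003 (
    ts        timestamp NOT NULL,
    device_id int,
    signal    float8
) SERVER arrow_fdw
  OPTIONS (file '/opt/nvme/logdata_202003.arrow');
INSERT INTO logdata_202003
  SELECT * FROM logdata_current
   WHERE ts >= '2020-03-01' AND ts < '2020-04-01';
ALTER TABLE logdata ATTACH PARTITION logdata_202003
  FOR VALUES FROM ('2020-03-01') TO ('2020-04-01');

-- drop the frozen rows from the row store, then put it back as the default partition
DELETE FROM logdata_current
 WHERE ts >= '2020-03-01' AND ts < '2020-04-01';
ALTER TABLE logdata ATTACH PARTITION logdata_current DEFAULT;
COMMIT;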
34. Asymmetric Partition-wise JOIN (1/4)
[Diagram: lineorder is hash-partitioned into lineorder_p0 / lineorder_p1 / lineorder_p2 (remainder = 0, 1, 2), placed on tablespaces nvme0 / nvme1 / nvme2; the dimension tables customer, date, supplier and parts are not partitioned. Plan shape: Scan on each partition -> Append -> Join with the dimension tables -> Agg -> query results.]
Records from the partition leafs must be brought back to the CPU and processed there once, so a massive number of records has to be handled on the CPU before the JOIN.
35. Asymmetric Partition-wise JOIN (2/4)
postgres=# explain select * from ptable p, t1 where p.a = t1.aid;
QUERY PLAN
----------------------------------------------------------------------------------
Hash Join (cost=2.12..24658.62 rows=49950 width=49)
Hash Cond: (p.a = t1.aid)
-> Append (cost=0.00..20407.00 rows=1000000 width=12)
-> Seq Scan on ptable_p0 p (cost=0.00..5134.63 rows=333263 width=12)
-> Seq Scan on ptable_p1 p_1 (cost=0.00..5137.97 rows=333497 width=12)
-> Seq Scan on ptable_p2 p_2 (cost=0.00..5134.40 rows=333240 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
(8 rows)
36. Asymmetric Partition-wise JOIN (3/4)
Push down JOIN/GROUP BY, even if the smaller half is not a partitioned table.
[Diagram: the same partitioning (lineorder_p0 / p1 / p2 on nvme0 / nvme1 / nvme2, plus customer, date, supplier and parts); Scan, Join and PreAgg now run under each partition, then Append -> Agg -> query results.]
Only a very small number of partial results of JOIN/GROUP BY has to be gathered on the CPU.
37. Asymmetric Partition-wise JOIN (4/4)
postgres=# set enable_partitionwise_join = on;
SET
postgres=# explain select * from ptable p, t1 where p.a = t1.aid;
QUERY PLAN
----------------------------------------------------------------------------------
Append (cost=2.12..19912.62 rows=49950 width=49)
-> Hash Join (cost=2.12..6552.96 rows=16647 width=49)
Hash Cond: (p.a = t1.aid)
-> Seq Scan on ptable_p0 p (cost=0.00..5134.63 rows=333263 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
-> Hash Join (cost=2.12..6557.29 rows=16658 width=49)
Hash Cond: (p_1.a = t1.aid)
-> Seq Scan on ptable_p1 p_1 (cost=0.00..5137.97 rows=333497 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
-> Hash Join (cost=2.12..6552.62 rows=16645 width=49)
Hash Cond: (p_2.a = t1.aid)
-> Seq Scan on ptable_p2 p_2 (cost=0.00..5134.40 rows=333240 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
(16 rows)
38. PCIe-bus level optimization (1/3)
■ HPC Server: optimization for GPUDirect RDMA
[Diagram: Supermicro SYS-4029TRT2 with two x96-lane PCIe switches, one attached to each CPU via Gen3 x16 (CPU1 and CPU2 linked by QPI); each switch provides Gen3 x16 for each slot]
■ I/O Expansion Box
[Diagram: NEC ExpEther 40G (4-slot edition): extra I/O boxes, each with a PCIe switch and 4 slots of PCIe Gen3 x8, attached through a network switch and 40Gb Ethernet to the host CPU and NIC]
39. PCIe-bus level optimization (2/3)
By P2P DMA over the PCIe switches, the major data traffic bypasses the CPUs.
[Diagram: four units, each consisting of a PCIe switch with one SSD and one GPU; SCAN, JOIN and GROUP BY run inside each unit, and only the pre-processed data (very small) is gathered by the two CPUs.]
42. Large-scale Benchmark (1/4)
Capable of processing 100TB of log data on a single node in reasonable time.
All of SSD-to-GPU Direct SQL, Apache Arrow and PCIe-bus optimization are used.
[Diagram: CPU1 and CPU2 (linked by QPI), each attaching two PCIe switches; every switch connects one GPU (GPU0-GPU3) and one HBA (HBA0-HBA3) to four NVME drives (NVME0-NVME15) housed in JBOF unit-0 and unit-1]
• 8.0-10GB/s of physical data transfer capability per GPU + 4x SSD unit
• 25-30GB/s of effective data transfer capability per unit thanks to the columnar data structure
• 100-120GB/s of effective data transfer capability with the 4 units in parallel
43. Large-scale Benchmark (2/4)
Built an HPC 4U server + 4 GPUs + 16 NVME-SSDs configuration:
• Supermicro SYS-4029GP-TRT (PCIe switches, Gen3 x16 for each slot)
• NVIDIA Tesla V100 x4
• Intel SSD DC P4510 (1.0TB) x16
• SerialCables dual PCI-ENC8G-08A (U.2 NVME JBOF; 8 slots) x2
• SerialCables PCI-AD-x16HE host adapter x4, connected via SFF-8644 based PCIe x4 cables (3.2GB/s x 16 = max 51.2GB/s)
SPECIAL THANKS
44. Large-scale Benchmark (3/4)
Built the HPC 4U server + 4 GPUs + 16 NVME-SSDs configuration.
45. Large-scale Benchmark (4/4)
Distributed 3.5TB of test data into 4 hash partitions (N=4), 879GB each: lineorder_p0 .. lineorder_p3 under the lineorder table.
[Diagram: the same topology as before: CPU1 and CPU2 linked by UPI, four PCIe switches, NVME0-NVME15, GPU0-GPU3 and HBA0-HBA3, with one lineorder partition per GPU + 4x SSD unit]
46. Benchmark Results (1/2)
A billion rows per second on a single-node PostgreSQL!!
[Chart: Star Schema Benchmark results, SF=4,000 (3.5TB; 24B rows), 4x GPU, 16x NVME-SSD; million rows processed per second for Q1_1 .. Q4_3. PostgreSQL 11.5: about 44-67; PG-Strom 2.2: about 249-258; PG-Strom 2.2 + Arrow_Fdw: about 658-1,719.]
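As a check on the headline number: at SF=4,000 the lineorder table holds about 24 billion rows, so Q1_1 at 1,681 million rows/s corresponds to roughly 24B / 1.68B, i.e. about 14 seconds of query response time.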
47. Benchmark Results (2/2)
40GB/s Read from NVME-SSDs during SQL execution (95% of H/W limit)
[Chart: NVME-SSD read throughput in MB/s (0-40,000) over elapsed time (0-340 sec) for /dev/md0 (GPU0), /dev/md1 (GPU1), /dev/md2 (GPU2) and /dev/md3 (GPU3) during SQL execution.]
48. Future Direction
49. NVIDIA GPUDirect Storage (cuFile API) support
■ NVIDIA GPUDirect Storage?
• API & driver stack for direct reads from NVME to GPU
→ Features are almost equivalent to NVME-Strom
→ Ubuntu 18.04 & RHEL8/CentOS8 shall be supported at the initial release
• NVIDIA will officially release the software in 2020.
■ Why beneficial?
• A Linux kernel driver that is tested/evaluated by NVIDIA's QA process.
• Involvement of a broader ecosystem with third-party solutions, like distributed filesystems, block storage and software-defined storage.
• Broader OS support; Ubuntu 18.04.
(Figure callout: PG-Strom is here)
Ref: GPUDIRECT STORAGE: A DIRECT GPU-STORAGE DATA PATH
https://on-demand.gputechconf.com/supercomputing/2019/pdf/sc1922-gpudirect-storage-transfer-data-directly-to-gpu-memory-alleviating-io-bottlenecks.pdf
WIP
51. PostGIS Functions & Operator support (2/2)
■ Current status
• geometry st_makepoint(float8,float8,...)
• float8 st_distance(geometry,geometry)
• bool st_dwithin(geometry,geometry,float8)
• bool st_contains(geometry,geometry)
• bool st_crosses(geometry,geometry)
■ Next: GiST (R-tree) index support
• Index-based nested-loop for up to a million polygons x a billion points.
• Runs the index search with thousands of GPU threads in parallel.
→ Because of the GiST internal structure, GPU-parallel execution is likely efficient.
WIP
[Diagram: a GiST (R-tree) index built over the polygon definitions is searched against a table with location data by thousands of GPU threads in parallel]
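A sketch of the kind of query this targets, using only the functions listed above (table and column names are hypothetical):

-- count observation points falling inside each polygon; with GiST support,
-- the index over the polygons is probed by thousands of GPU threads in parallel
SELECT a.area_id, count(*)
  FROM areas a, points p
 WHERE st_contains(a.geom, st_makepoint(p.lon, p.lat))
 GROUP BY a.area_id;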
52. Resources
■ Repository
• https://github.com/heterodb/pg-strom
• https://heterodb.github.io/swdc/
■ Document
• http://heterodb.github.io/pg-strom/
■ Contact
• kaigai@heterodb.com
• Tw: @kkaigai / @heterodb
55. Apache Arrow Internal (1/3)
■ Apache Arrow Internal
• Header
  - Fixed byte string: 'ARROW1\0\0'
• Schema Definition
  - # of columns and column definitions.
  - Data type/precision, column name, column number, ...
• Record Batch
  - Internal data block that contains a certain # of rows.
  - E.g., if N=1M rows with (Int32, Float64), this record-batch begins with an array of 1M Int32 values, then an array of 1M Float64 values follows.
• Dictionary Batch
  - Optional area for dictionary compression.
  - E.g.) 1 = 'Tokyo', 2 = 'Osaka', 3 = 'Kyoto'
    -> Int32 representation for frequently appearing words
• Footer
  - Metadata: offset/length of the RecordBatches and DictionaryBatches
[Layout of an Apache Arrow file: Header 'ARROW1\0\0', Schema Definition, DictionaryBatch-0, RecordBatch-0 .. RecordBatch-k, Footer listing DictionaryBatch[0] (offset, size) and RecordBatch[0..k] (offset, size)]
56. Apache Arrow Internal (2/3) - writable Arrow_Fdw
[Diagram: Arrow file before writes: Header 'ARROW1\0\0', Schema Definition, DictionaryBatch-0, RecordBatch-0 .. RecordBatch-k, and a Footer listing DictionaryBatch[0] and RecordBatch[0..k].
Arrow file after writes: the rows written by a single INSERT command become a new RecordBatch-(k+1), placed over the original Footer area; a new-revision Footer listing RecordBatch[0..k+1] is then written at the end of the file.]
57. Apache Arrow Internal (3/3) - writable Arrow_Fdw
[Diagram: the same before/after layout as on the previous slide, with one addition: the original Footer image is preserved.]
Arrow_Fdw keeps the original Footer image until commit, to support rollback.
58. Pg2Arrow / MySQL2Arrow
$ pg2arrow --help
Usage:
pg2arrow [OPTION] [database] [username]
General options:
-d, --dbname=DBNAME Database name to connect to
-c, --command=COMMAND SQL command to run
-t, --table=TABLENAME Table name to be dumped
(-c and -t are exclusive, either of them must be given)
-o, --output=FILENAME result file in Apache Arrow format
--append=FILENAME result Apache Arrow file to be appended
(--output and --append are exclusive. If neither of them
are given, it creates a temporary file.)
Arrow format options:
-s, --segment-size=SIZE size of record batch for each
Connection options:
-h, --host=HOSTNAME database server host
-p, --port=PORT database server port
-u, --user=USERNAME database user name
-w, --no-password never prompt for password
-W, --password force password prompt
Other options:
--dump=FILENAME dump information of arrow file
--progress shows progress of the job
--set=NAME:VALUE config option to set before SQL execution
--help shows this message
→ mysql2arrow also has almost equivalent capability.
Pg2Arrow / MySQL2Arrow enables dumping SQL results as an Apache Arrow file.
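For example, dumping a whole table into an Arrow file could look like this (database and table names are hypothetical; the flags are those listed in the help text above):

$ pg2arrow -h localhost -d ssbm -t lineorder -o /opt/nvme/lineorder.arrow --progress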
[Diagram: Pg2Arrow writes Apache Arrow data files, which Arrow_Fdw then maps into PostgreSQL]