Full-throttle running on Terabytes log-data
HeteroDB, Inc.
Chief Architect & CEO
KaiGai Kohei <kaigai@heterodb.com>
Self-Introduction
▌About myself
 KaiGai Kohei (海外浩平)
 Chief Architect & CEO of HeteroDB
 Contributor of PostgreSQL (2006-)
 Primary developer of PG-Strom (2012-)
 Interested in: big-data, GPU, NVME/PMEM, ...
▌About our company
 Established: 4th-Jul-2017
 Location: Shinagawa, Tokyo, Japan
 Businesses:
✓ Sales & development of high-performance data-processing software on top of heterogeneous architecture
✓ Technology consulting service in the GPU & DB area
What is PG-Strom?
PG-Strom is an extension of PostgreSQL for terabyte-scale data processing and interoperation with AI/ML, utilizing GPU and NVME-SSD.
 Transparent GPU acceleration (off-loading) for analytics and reporting workloads
 Binary code generation by JIT from SQL statements
 PCIe-bus level optimization by SSD-to-GPU Direct SQL technology
 Columnar storage for efficient I/O and vector processors
For IoT/Big-Data:
➢ SSD-to-GPU Direct SQL
➢ Columnar Store (Arrow_Fdw)
➢ Asymmetric Partition-wise JOIN/GROUP BY
➢ BRIN-Index Support
➢ NVME-over-Fabric Support
For ML/Analytics:
➢ Procedural Language for GPU native code (PL/CUDA)
➢ NVIDIA RAPIDS data frame support (WIP)
➢ IPC on GPU device memory
PG-Strom as an open source project
 Official documentation in English / Japanese: http://heterodb.github.io/pg-strom/
 Many stars ☺
 Distributed under GPL-v2.0
 In development since 2012
▌Supported PostgreSQL versions
 PostgreSQL v11.x, v10.x and v9.6.x
▌Related contributions to PostgreSQL
 Writable FDW support (9.3)
 Custom-Scan interface (9.5)
 FDW and Custom-Scan JOIN pushdown (9.5)
▌Talks at the PostgreSQL community
 PGconf.EU 2012 (Prague), 2015 (Vienna), 2018 (Lisbon)
 PGconf.ASIA 2018 (Tokyo), 2019 (Bali, Indonesia)
 PGconf.SV 2016 (San Francisco)
 PGconf.China 2015 (Beijing)
Terabyte-scale log-data processing on PostgreSQL
System image of log-data platform
 All kinds of devices (manufacturing, logistics, mobile, home electronics) generate various forms of log-data.
 People want to find insight in that data.
 Data importing and processing must be rapid.
 System administration should be simple and easy.
(Diagram: the devices feed log-data into a PG-Strom server backed by a JBoF (Just a Bunch of Flash) attached over NVME-over-Fabric (RDMA); the DB admin, BI tools, and machine-learning applications (e.g., anomaly detection) access it through a shared data-frame.)
Why the elephant is cobalt blue, not yellow
▌This is PGconf.ASIA 2019 :-)
▌A standalone system is much simpler than a multi-node cluster system.
▌Engineers have been familiar with PostgreSQL for more than 10 years.
▌How many users actually have petabyte-scale data?
How much data size should we expect? (1/3)
Case: a semiconductor factory managed up to 100TB with PostgreSQL
▌This is a factory of an international top-brand company.
▌Most users don't have bigger data than they do.
(Total 20TB per year, kept for 5 years for defect investigation.)
How much data size should we expect? (2/3)
Nowadays, a 2U server is capable of 100TB-class capacity.
model: Supermicro 2029U-TN24R4T
CPU: Intel Xeon Gold 6226 (12C, 2.7GHz) x2
RAM: 32GB RDIMM (DDR4-2933, ECC) x12
GPU: NVIDIA Tesla P40 (3840C, 24GB) x2
HDD: Seagate 1.0TB SATA (7.2krpm) x1
NVME: Intel DC P4510 (8.0TB, U.2) x24
N/W: built-in 10GBase-T x4
NVME capacity: 8.0TB x 24 = 192TB
How much is it? 60,932USD (by thinkmate.com, 31st-Aug-2019)
How much data size should we expect? (3/3)
On the other hand, it is not small, either.
(Chart: data-size comparison on a GB/TB scale.)
Weapons to tackle terabyte-scale data
• Efficient Storage: SSD-to-GPU Direct SQL. Direct data transfer pulls out the maximum performance of NVME for SQL workloads.
• Efficient Data Structure: Arrow_Fdw. A columnar store that is optimal for both I/O throughput and vector processors.
• Partitioning: PostgreSQL Partitioning & Asymmetric Partition-wise JOIN. Prunes the ranges that are obviously unreferenced.
PG-Strom: SSD-to-GPU Direct SQL
Efficient Storage
GPU's characteristics - mostly as a computing accelerator
Over 10 years of history in HPC (simulation, computer graphics; supercomputers such as TITECH's TSUBAME3.0), then massive popularization in machine-learning (e.g., NVIDIA Tesla V100).
Today's topic: how can I/O workloads be accelerated by the GPU, which is a computing accelerator?
A usual composition of an x86_64 server
(Diagram: CPU and RAM at the center of the PCIe bus, with GPU, SSD, HDD and N/W devices attached.)
Data flow to process a massive amount of data
Normal data flow (SSD → CPU/RAM → GPU over the PCIe bus):
All the records, including junk, must be loaded onto RAM once, because software cannot check the necessity of the rows prior to data loading. So the amount of I/O traffic over the PCIe bus tends to be large.
Unless records are loaded to CPU/RAM over the PCIe bus once, software cannot check their necessity, even if they are "junk" records.
SSD-to-GPU Direct SQL (1/4) – Overview
NVIDIA GPUDirect RDMA allows loading the data blocks on NVME-SSD to the GPU using peer-to-peer DMA over the PCIe bus, bypassing CPU/RAM.
The GPU then runs the SQL workloads (WHERE-clause, JOIN, GROUP BY) to reduce the data size, so only a small result reaches PostgreSQL.
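A minimal sketch of how this looks from psql, assuming the pg_strom.nvme_strom_enabled GUC and the EXPLAIN behavior of PG-Strom v2.x (the table and WHERE-clause are illustrative):

-- SSD-to-GPU Direct SQL is chosen automatically for sufficiently large
-- tables; this GUC turns the mechanism on/off explicitly.
SET pg_strom.nvme_strom_enabled = on;

-- The plan shows a GpuPreAgg/GpuJoin/GpuScan node; when direct SQL is
-- in use, the EXPLAIN output notes the NVMe-Strom data transfer.
EXPLAIN SELECT count(*), lo_shipmode
          FROM lineorder
         WHERE lo_quantity > 10
         GROUP BY lo_shipmode;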
Background - GPUDirect RDMA by NVIDIA
GPUDirect RDMA enables mapping GPU device memory into the physical address space of the host system (the PCIe BAR1 area, alongside RAM and other PCIe devices such as NVMe-SSDs and Infiniband HBAs).
Once a "physical address of GPU device memory" appears, we can use it as the source or destination address of DMA with PCIe devices.
Example DMA request: SRC = 1200th sector, LEN = 40 sectors, DST = 0xe0200000 (GPU device memory mapped into the BAR1 area between 0xe0000000 and 0xf0000000).
SSD-to-GPU Direct SQL (2/4) – System configuration and workloads
Summarizing queries for a typical star-schema structure on a simple 1U server:
Supermicro SYS-1019GP-TT
CPU: Xeon Gold 6126T (2.6GHz, 12C) x1
RAM: 192GB (32GB DDR4-2666 x6)
GPU: NVIDIA Tesla V100 (5120C, 16GB) x1
SSD: Intel SSD DC P4600 (HHHL; 2.0TB) x3 (striping configuration by md-raid0)
HDD: 2.0TB (SATA; 7.2krpm) x6
Network: 10Gb Ethernet, 2 ports
OS: Red Hat Enterprise Linux 7.6, CUDA 10.1 + NVIDIA Driver 418.40.04
DB: PostgreSQL v11.2 + PG-Strom v2.2devel

■ Query example (Q2_3)
SELECT sum(lo_revenue), d_year, p_brand1
  FROM lineorder, date1, part, supplier
 WHERE lo_orderdate = d_datekey
   AND lo_partkey = p_partkey
   AND lo_suppkey = s_suppkey
   AND p_category = 'MFGR#12'
   AND s_region = 'AMERICA'
 GROUP BY d_year, p_brand1
 ORDER BY d_year, p_brand1;

Table sizes: lineorder 2.4B rows (351GB); customer 12M rows (1.6GB); supplier 4.0M rows (528MB); part 1.8M rows (206MB); date1 2.5K rows (400KB).
SSD-to-GPU Direct SQL (3/4) – Benchmark results
 Query execution throughput = (353GB; DB size) / (query response time [sec])
 SSD-to-GPU Direct SQL runs the workloads close to the hardware limitation (8.5GB/s).
 It is about 3x faster than the filesystem- and CPU-based mechanism.
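For example, Q1_1 at 7,880MB/s over the 353GB database corresponds to a response time of roughly 353GB / 7.88GB/s ≈ 45 seconds, versus roughly 150 seconds at the ~2.3GB/s of the CPU-based run.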
(Chart: Star Schema Benchmark on 1x NVIDIA Tesla V100 + 3x Intel DC P4600; query execution throughput [MB/s] for Q1_1 through Q4_3. PostgreSQL 11.2 sustains about 2,170-2,340MB/s on every query, while PG-Strom 2.2devel sustains about 7,260-7,920MB/s.)
SSD-to-GPU Direct SQL (4/4) – Software Stack
 PostgreSQL issues ordinary read(2) calls through the filesystem, while the pg_strom extension issues ioctl(2) calls to the nvme_strom kernel module (software by HeteroDB) for direct reads.
 The filesystem (ext4, xfs) remains responsible for the translation from the logical location of a file to the physical location on the drive.
 The nvme_strom module issues a special NVME READ command that loads the source blocks on the NVME-SSD directly into the GPU's device memory (GPUDirect RDMA). For local storage it goes through the blk-mq / nvme-pcie path; for external storage it goes through nvme-rdma and a network HBA (RDMA over Converged Ethernet) to an NVME-over-Fabric target on the storage server.
PG-Strom: Arrow_Fdw (Columnar Store)
Efficient Data Structure
Background: Apache Arrow (1/2)
▌Characteristics
 Column-oriented data format designed for analytics workloads
 A common data format for inter-application exchange (e.g., between PostgreSQL / PG-Strom and NVIDIA GPU frameworks)
 Various primitive data types such as integer, floating-point, date/time and so on
Background: Apache Arrow (2/2)
Most data types are convertible between Apache Arrow and PostgreSQL:
Int → int2, int4, int8
FloatingPoint → float2, float4, float8 (float2 is an enhancement of PG-Strom)
Binary → bytea
Utf8 → text
Bool → bool
Decimal → numeric
Date → date (adjusted to unit = Day)
Time → time (adjusted to unit = MicroSecond)
Timestamp → timestamp (adjusted to unit = MicroSecond)
Interval → interval
List → array types (only 1-dimensional arrays are supported)
Struct → composite types
FixedSizeBinary → char(n)
Union, FixedSizeList, Map → (no PostgreSQL counterpart)
Log-data characteristics (1/2) – Where is it generated?
 Traditional OLTP & OLAP: data is generated inside the database system (data creation in OLTP, ETL into OLAP, BI tools on top).
 IoT/M2M use case: data is generated outside the database system; many devices create data, a gateway server collects it, and the log-processing database must import it before BI tools can use it.
Data importing becomes a heavy, time-consuming operation for big-data processing.
Background – FDW (Foreign Data Wrapper)
A foreign table allows SQL commands to read (and potentially write) an external data source as if it were a normal PostgreSQL table.
 The FDW module is responsible for transforming the external data into PostgreSQL's internal form:
✓ postgres_fdw → external RDBMS
✓ file_fdw → CSV files
✓ twitter_fdw → Twitter (Web API)
✓ Arrow_Fdw → Arrow files
 In the case of Arrow_Fdw, it maps Arrow files on the filesystem as foreign tables.
➔ Just mapping, so no need to import the external data again.
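A minimal sketch of mapping an Arrow file with Arrow_Fdw, assuming PG-Strom v2.x is installed (the file path and columns are illustrative and must match the Arrow schema):

CREATE FOREIGN TABLE flogdata (
    ts       timestamp,
    dev_id   int,
    signal   float
) SERVER arrow_fdw
  OPTIONS (file '/opt/nvme/logdata.arrow');

-- Immediately queryable; nothing is imported or copied.
SELECT count(*) FROM flogdata WHERE dev_id = 2345;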
SSD-to-GPU Direct SQL on Arrow_Fdw (1/3)
▌Why is Apache Arrow beneficial?
 Less I/O to load; only the referenced columns are read.
 Higher utilization of GPU cores, thanks to vector processing and the wide memory bus.
 Read-only structure; no MVCC checks are required at run-time.
It transfers ONLY the referenced columns over the SSD-to-GPU Direct SQL mechanism: P2P DMA moves just those columns from NVMe-SSD to the GPU over the PCIe bus, the GPU code reads the Apache Arrow format as its data source and runs the SQL workloads (WHERE-clause, JOIN, GROUP BY) in parallel on thousands of cores, and the small results are written back as PostgreSQL heap-tuples.
SSD-to-GPU Direct SQL on Arrow_Fdw (2/3) – Benchmark results
 Query execution throughput = (353GB as DB, or 310GB as Arrow) / (query response time [s])
 The combined use of SSD-to-GPU Direct SQL and the columnar store pulled out 15GB-49GB/s of query execution throughput, depending on the number of referenced columns.
 The server configuration is the one shown earlier: just a 1U rack server with 1 CPU + 1 GPU + 3 SSDs.
(Chart: Star Schema Benchmark on 1x NVIDIA Tesla V100 + 3x Intel DC P4600; query execution throughput [MB/s] for Q1_1 through Q4_3. PostgreSQL 11.2: about 2,200MB/s; PG-Strom 2.2devel: about 7,300-7,900MB/s; PG-Strom 2.2devel + Arrow_Fdw: 15,919-49,038MB/s, peaking at Q1_1.)
SSD-to-GPU Direct SQL on Arrow_Fdw (3/3) – Validation of the results
Foreign table "public.flineorder"
Column | Type | Size
--------------------+---------------+--------------------------------
lo_orderkey | numeric | 35.86GB
lo_linenumber | integer | 8.96GB
lo_custkey | numeric | 35.86GB
lo_partkey | integer | 8.96GB
lo_suppkey | numeric | 35.86GB <-- ★Referenced by Q2_1
lo_orderdate | integer | 8.96GB <-- ★Referenced by Q2_1
lo_orderpriority | character(15) | 33.61GB <-- ★Referenced by Q2_1
lo_shippriority | character(1) | 2.23GB
lo_quantity | integer | 8.96GB
lo_extendedprice | bigint | 17.93GB
lo_ordertotalprice | bigint | 17.93GB
lo_discount | integer | 8.96GB
lo_revenue | bigint | 17.93GB
lo_supplycost | bigint | 17.93GB <-- ★Referenced by Q2_1
lo_tax | integer | 8.96GB
lo_commit_date | character(8) | 17.93GB
lo_shipmode | character(10) | 22.41GB
FDW options: (file '/opt/nvme/lineorder_s401.arrow') ... file size = 310GB
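The four starred columns referenced by Q2_1 add up to 35.86 + 8.96 + 33.61 + 17.93 = 96.36GB, which matches the amount actually read below.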
▌Only 96.4GB of the 310GB file was actually read from NVME-SSD (31.1%).
▌The execution time of Q2_1 is 11.06s, so 96.4GB / 11.06s = 8.7GB/s.
➔ This is a reasonable performance for 3x Intel DC P4600 on a single-CPU configuration.
The raw data-transfer rate is almost equivalent to the heap case, but there is no need to copy unreferenced columns.
Generation of Apache Arrow files
The pg2arrow command saves SQL results in Arrow format, ready to be mapped by Arrow_Fdw.
✓ Simply specify the SQL to run with "-c" and the Arrow file to be written with "-o".
$ ./pg2arrow -h
Usage:
  pg2arrow [OPTION]... [DBNAME [USERNAME]]
General options:
  -d, --dbname=DBNAME     database name to connect to
  -c, --command=COMMAND   SQL command to run
  -f, --file=FILENAME     SQL command from file
  -o, --output=FILENAME   result file in Apache Arrow format
Arrow format options:
  -s, --segment-size=SIZE size of record batch for each
                          (default is 256MB)
Connection options:
  -h, --host=HOSTNAME     database server host
  -p, --port=PORT         database server port
  -U, --username=USERNAME database user name
  -w, --no-password       never prompt for password
  -W, --password          force password prompt
Debug options:
      --dump=FILENAME     dump information of arrow file
      --progress          shows progress of the job
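For example, dumping a whole table into an Arrow file that Arrow_Fdw can map later (the database name, query and output path are illustrative):

$ ./pg2arrow -d postgres --progress \
      -c 'SELECT * FROM lineorder' \
      -o /opt/nvme/lineorder.arrow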
PostgreSQL Partitioning and PCIe-bus level optimization
Partitioning
Log-data characteristics (2/2) – INSERT-only
✓ MVCC visibility checks are (relatively) insignificant.
✓ Rows with old timestamps are never inserted.
While transactional data receives INSERT, UPDATE and DELETE anywhere in the table, log-data with timestamps receives INSERT only, always at the current end of the timeline (2017 → 2018 → current).
Configuration of PostgreSQL partition for log-data (1/2)
▌Mixture of PostgreSQL tables and Arrow foreign tables in one partition declaration
▌Log-data always has a timestamp and is never updated
➔ Old data can be moved to an Arrow foreign table for more efficient I/O
A logdata table (PARTITION BY timestamp) receives incoming log rows (e.g., 2019-03-21 12:34:56, dev_id: 2345, signal: 12.4) into logdata_current, a PostgreSQL table (row data store; read-writable but slow), while closed months such as logdata_201812, logdata_201901 and logdata_201902 are Arrow foreign tables (column data store; read-only but fast).
Unrelated child tables are skipped if the query contains a WHERE-clause on the timestamp, e.g.:
  WHERE timestamp > '2019-01-23' AND device_id = 456
A DDL sketch of such a partition tree follows.
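A minimal DDL sketch, assuming PostgreSQL v11 range partitioning and Arrow_Fdw (table names, columns and file paths are illustrative):

CREATE TABLE logdata (
    ts      timestamp NOT NULL,
    dev_id  int,
    signal  float
) PARTITION BY RANGE (ts);

-- the writable head of the partition tree: a normal row-store table
CREATE TABLE logdata_current PARTITION OF logdata DEFAULT;

-- a closed month: a read-only but fast Arrow foreign table
CREATE FOREIGN TABLE logdata_201902 PARTITION OF logdata
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01')
    SERVER arrow_fdw
    OPTIONS (file '/opt/nvme/logdata_201902.arrow');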
Configuration of PostgreSQL partition for log-data (2/2)
A month later, the data of 2019-03 is extracted from logdata_current into a new Arrow foreign table, logdata_201903, which is attached to the logdata table (PARTITION BY timestamp) alongside logdata_201812, logdata_201901 and logdata_201902, while logdata_current keeps receiving fresh log rows with timestamps.
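A sketch of that monthly rotation under the same assumptions (the commands, paths and transaction boundaries are illustrative; the exact procedure depends on the operation policy):

$ ./pg2arrow -d postgres -o /opt/nvme/logdata_201903.arrow \
      -c "SELECT * FROM logdata_current WHERE ts >= '2019-03-01' AND ts < '2019-04-01'"

BEGIN;
-- remove the migrated rows from the row-store head ...
DELETE FROM logdata_current
 WHERE ts >= '2019-03-01' AND ts < '2019-04-01';
-- ... and attach the Arrow file as the partition for that month
CREATE FOREIGN TABLE logdata_201903 PARTITION OF logdata
    FOR VALUES FROM ('2019-03-01') TO ('2019-04-01')
    SERVER arrow_fdw
    OPTIONS (file '/opt/nvme/logdata_201903.arrow');
COMMIT;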
PCIe-bus level optimization (1/3)
Physical data distribution, and utilization of the closest GPU: each partition (logdata_201812 ... logdata_201903) is placed on the SSDs under one PCIe switch, paired with the GPU under the same switch. Near is better; far is worse.
PCIe-bus level optimization (2/3)
By P2P DMA over the PCIe switch, the major data traffic bypasses the CPU: each SSD+GPU pair runs SCAN, JOIN and GROUP BY locally, and only the pre-processed data (very small) is gathered by the CPUs.
PCIe-bus level optimization (3/3)
▌HPC server – optimized for GPUDirect RDMA
 E.g., Supermicro SYS-4029TRT2: two x96-lane PCIe switches, one per CPU (CPUs connected over QPI), with Gen3 x16 to each CPU and Gen3 x16 for each slot.
▌I/O expansion box
 E.g., NEC ExpEther 40G (4-slot edition): 4 slots of PCIe Gen3 x8 behind a PCIe switch, attached to the host CPU's NIC through a network switch over 40Gb Ethernet; extra I/O boxes can be added.
Special Optimization – Combined GPU Kernel
Reduction of "data ping-pong" is a key to performance. Instead of running separate GpuScan, GpuJoin and GpuPreAgg kernels, each bouncing intermediate GPU buffers back and forth, PG-Strom builds one combined GPU kernel for SCAN + JOIN + GROUP BY: SSD-to-GPU Direct SQL feeds the large input straight into a GPU buffer, the combined kernel reduces it on the device, and only a small DMA buffer returns to the CPU for the final Agg step in PostgreSQL. There is no data-transfer ping-pong between GPU, CPU and storage.
Asymmetric Partition-wise JOIN (1/5)
Without it, records from the partition leaves must be brought back to the CPU and processed there first: each leaf of lineorder (lineorder_p0/p1/p2, hash-partitioned with remainder = 0/1/2 over tablespaces nvme0/nvme1/nvme2) is scanned, the Append node gathers them, and only then are customer, date, supplier and parts joined and aggregated. A massive number of records must be processed on the CPU.
Asymmetric Partition-wise JOIN (2/5)
Push the JOIN/GROUP BY down below the Append, even if the smaller side is not a partitioned table: each leaf runs Scan → Join (against customer, date, supplier, parts) → PreAgg on its own tablespace, and only a very small number of partial JOIN/GROUP BY results are gathered on the CPU.
※ This feature was under proposal in the PostgreSQL 13 commitfest at the time of this talk.
Asymmetric Partition-wise JOIN (3/5)
postgres=# explain select * from ptable p, t1 where p.a = t1.aid;
QUERY PLAN
----------------------------------------------------------------------------------
Hash Join (cost=2.12..24658.62 rows=49950 width=49)
Hash Cond: (p.a = t1.aid)
-> Append (cost=0.00..20407.00 rows=1000000 width=12)
-> Seq Scan on ptable_p0 p (cost=0.00..5134.63 rows=333263 width=12)
-> Seq Scan on ptable_p1 p_1 (cost=0.00..5137.97 rows=333497 width=12)
-> Seq Scan on ptable_p2 p_2 (cost=0.00..5134.40 rows=333240 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
(8 rows)
Asymmetric Partition-wise JOIN (4/5)
postgres=# set enable_partitionwise_join = on;
SET
postgres=# explain select * from ptable p, t1 where p.a = t1.aid;
QUERY PLAN
----------------------------------------------------------------------------------
Append (cost=2.12..19912.62 rows=49950 width=49)
-> Hash Join (cost=2.12..6552.96 rows=16647 width=49)
Hash Cond: (p.a = t1.aid)
-> Seq Scan on ptable_p0 p (cost=0.00..5134.63 rows=333263 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
-> Hash Join (cost=2.12..6557.29 rows=16658 width=49)
Hash Cond: (p_1.a = t1.aid)
-> Seq Scan on ptable_p1 p_1 (cost=0.00..5137.97 rows=333497 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
-> Hash Join (cost=2.12..6552.62 rows=16645 width=49)
Hash Cond: (p_2.a = t1.aid)
-> Seq Scan on ptable_p2 p_2 (cost=0.00..5134.40 rows=333240 width=12)
-> Hash (cost=1.50..1.50 rows=50 width=37)
-> Seq Scan on t1 (cost=0.00..1.50 rows=50 width=37)
(16 rows)
Asymmetric Partition-wise JOIN (5/5)
May be available in PostgreSQL v13.
(Near) Future works & Conclusion
Benchmarks on HPC Server (1/2) [WIP]
Supermicro SYS-4029GP-TRT with NVIDIA Tesla V100 x4 and two SerialCables PCI-ENC8G-08A JBOFs (U.2 NVME, 8 slots each) holding Intel SSD DC P4510 (1.0TB) x16, attached through SerialCables PCI-AD-x16HE adapters x4 via SFF-8644 based PCIe x4 cables (3.2GB/s x 16 = max 51.2GB/s). Each CPU drives two PCIe switches, with Gen3 x16 for each slot.
Facilities are being arranged now; we plan to run the benchmark project in Sep/Oct 2019.
Benchmarks on HPC Server (2/2) [WIP]
Towards effective 100GB/s performance by PCIe-bus optimization and columnar storage: two JBOF units, each holding 8 NVME drives (NVME0-15), feed four HBAs paired with GPU0-3 behind PCIe switches on CPU1/CPU2 (connected over QPI).
 8.0-10GB/s physical data-transfer performance per (GPU + 4 SSD) unit
 By columnar storage, 25-30GB/s effective data transfer per unit
 By 4-unit parallel execution, 100-120GB/s effective data-processing capability
➔ Capable of processing up to 100TB-grade log-data in a single node
Future works
▌Asymmetric Partition-wise JOIN (PostgreSQL v13)
▌MVCC checks on GPU device
▌Data-frame exchange over GPU device memory (NVIDIA RAPIDS)
▌RHEL8 support (Linux kernel driver)
▌DRAM access reductions on GpuJoin
▌More reliable statistics framework
▌Expansion of regression test cases
▌……and so on
The PG-Strom development team welcomes your contributions!
Conclusion
▌Characteristics of Log-data
 INSERT-only
 Must be imported once
 Always have timestamp
▌Weapons we can use
 SSD-to-GPU Direct SQL
 Arrow_Fdw
 PostgreSQL Partitioning
 PCIe-bus level optimization
▌Why PostgreSQL?
 A standalone system is much simpler and less expensive than a cluster.
 Engineers have been familiar with PostgreSQL for over 10 years.
Full utilization of the hardware, optimized data structures, and proper partitioning enable terabyte-scale data processing on a standalone PostgreSQL system.
Resources
▌Repository
 https://github.com/heterodb/pg-strom
 https://heterodb.github.io/swdc/
▌Documents
 http://heterodb.github.io/pg-strom/
▌Contact
 kaigai@heterodb.com
 Tw: @kkaigai
Full-throttle running on Terabytes log-data with PG-Strom

Mais conteúdo relacionado

Mais procurados

GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...
GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...
GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...Kohei KaiGai
 
PG-Strom v2.0 Technical Brief (17-Apr-2018)
PG-Strom v2.0 Technical Brief (17-Apr-2018)PG-Strom v2.0 Technical Brief (17-Apr-2018)
PG-Strom v2.0 Technical Brief (17-Apr-2018)Kohei KaiGai
 
GPU Accelerated Data Science with RAPIDS - ODSC West 2020
GPU Accelerated Data Science with RAPIDS - ODSC West 2020GPU Accelerated Data Science with RAPIDS - ODSC West 2020
GPU Accelerated Data Science with RAPIDS - ODSC West 2020John Zedlewski
 
20171206 PGconf.ASIA LT gstore_fdw
20171206 PGconf.ASIA LT gstore_fdw20171206 PGconf.ASIA LT gstore_fdw
20171206 PGconf.ASIA LT gstore_fdwKohei KaiGai
 
Demystifying DataFrame and Dataset
Demystifying DataFrame and DatasetDemystifying DataFrame and Dataset
Demystifying DataFrame and DatasetKazuaki Ishizaki
 
Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)
Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)
Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)Gruter
 
Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...
Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...
Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...Cloudera, Inc.
 
Japan Lustre User Group 2014
Japan Lustre User Group 2014Japan Lustre User Group 2014
Japan Lustre User Group 2014Hitoshi Sato
 
PL/CUDA - Fusion of HPC Grade Power with In-Database Analytics
PL/CUDA - Fusion of HPC Grade Power with In-Database AnalyticsPL/CUDA - Fusion of HPC Grade Power with In-Database Analytics
PL/CUDA - Fusion of HPC Grade Power with In-Database AnalyticsKohei KaiGai
 
Supermicro High Performance Enterprise Hadoop Infrastructure
Supermicro High Performance Enterprise Hadoop InfrastructureSupermicro High Performance Enterprise Hadoop Infrastructure
Supermicro High Performance Enterprise Hadoop Infrastructuretempledf
 
Introduction to Apache Hivemall v0.5.2 and v0.6
Introduction to Apache Hivemall v0.5.2 and v0.6Introduction to Apache Hivemall v0.5.2 and v0.6
Introduction to Apache Hivemall v0.5.2 and v0.6Makoto Yui
 
AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合
AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合
AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合Hitoshi Sato
 
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...Alex Levenson
 
PG-Strom - GPU Accelerated Asyncr
PG-Strom - GPU Accelerated AsyncrPG-Strom - GPU Accelerated Asyncr
PG-Strom - GPU Accelerated AsyncrKohei KaiGai
 
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataHitoshi Sato
 
Scale Out Your Graph Across Servers and Clouds with OrientDB
Scale Out Your Graph Across Servers and Clouds  with OrientDBScale Out Your Graph Across Servers and Clouds  with OrientDB
Scale Out Your Graph Across Servers and Clouds with OrientDBLuca Garulli
 

Mais procurados (20)

GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...
GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...
GPU/SSD Accelerates PostgreSQL - challenge towards query processing throughpu...
 
PG-Strom v2.0 Technical Brief (17-Apr-2018)
PG-Strom v2.0 Technical Brief (17-Apr-2018)PG-Strom v2.0 Technical Brief (17-Apr-2018)
PG-Strom v2.0 Technical Brief (17-Apr-2018)
 
GPU Accelerated Data Science with RAPIDS - ODSC West 2020
GPU Accelerated Data Science with RAPIDS - ODSC West 2020GPU Accelerated Data Science with RAPIDS - ODSC West 2020
GPU Accelerated Data Science with RAPIDS - ODSC West 2020
 
20171206 PGconf.ASIA LT gstore_fdw
20171206 PGconf.ASIA LT gstore_fdw20171206 PGconf.ASIA LT gstore_fdw
20171206 PGconf.ASIA LT gstore_fdw
 
Demystifying DataFrame and Dataset
Demystifying DataFrame and DatasetDemystifying DataFrame and Dataset
Demystifying DataFrame and Dataset
 
Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)
Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)
Gruter_TECHDAY_2014_04_TajoCloudHandsOn (in Korean)
 
Hadoop pig
Hadoop pigHadoop pig
Hadoop pig
 
Apache Nemo
Apache NemoApache Nemo
Apache Nemo
 
Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...
Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...
Hadoop World 2011: The Powerful Marriage of R and Hadoop - David Champagne, R...
 
Japan Lustre User Group 2014
Japan Lustre User Group 2014Japan Lustre User Group 2014
Japan Lustre User Group 2014
 
PL/CUDA - Fusion of HPC Grade Power with In-Database Analytics
PL/CUDA - Fusion of HPC Grade Power with In-Database AnalyticsPL/CUDA - Fusion of HPC Grade Power with In-Database Analytics
PL/CUDA - Fusion of HPC Grade Power with In-Database Analytics
 
Supermicro High Performance Enterprise Hadoop Infrastructure
Supermicro High Performance Enterprise Hadoop InfrastructureSupermicro High Performance Enterprise Hadoop Infrastructure
Supermicro High Performance Enterprise Hadoop Infrastructure
 
Introduction to Apache Hivemall v0.5.2 and v0.6
Introduction to Apache Hivemall v0.5.2 and v0.6Introduction to Apache Hivemall v0.5.2 and v0.6
Introduction to Apache Hivemall v0.5.2 and v0.6
 
Streams on wires
Streams on wiresStreams on wires
Streams on wires
 
AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合
AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合
AI橋渡しクラウド(ABCI)における高性能計算とAI/ビッグデータ処理の融合
 
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
Hadoop Summit 2015: Performance Optimization at Scale, Lessons Learned at Twi...
 
myHadoop 0.30
myHadoop 0.30myHadoop 0.30
myHadoop 0.30
 
PG-Strom - GPU Accelerated Asyncr
PG-Strom - GPU Accelerated AsyncrPG-Strom - GPU Accelerated Asyncr
PG-Strom - GPU Accelerated Asyncr
 
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big DataABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data
 
Scale Out Your Graph Across Servers and Clouds with OrientDB
Scale Out Your Graph Across Servers and Clouds  with OrientDBScale Out Your Graph Across Servers and Clouds  with OrientDB
Scale Out Your Graph Across Servers and Clouds with OrientDB
 

Semelhante a Full-throttle running on Terabytes log-data with PG-Strom

20181210 - PGconf.ASIA Unconference
20181210 - PGconf.ASIA Unconference20181210 - PGconf.ASIA Unconference
20181210 - PGconf.ASIA UnconferenceKohei KaiGai
 
20170602_OSSummit_an_intelligent_storage
20170602_OSSummit_an_intelligent_storage20170602_OSSummit_an_intelligent_storage
20170602_OSSummit_an_intelligent_storageKohei KaiGai
 
Advancing GPU Analytics with RAPIDS Accelerator for Spark and Alluxio
Advancing GPU Analytics with RAPIDS Accelerator for Spark and AlluxioAdvancing GPU Analytics with RAPIDS Accelerator for Spark and Alluxio
Advancing GPU Analytics with RAPIDS Accelerator for Spark and AlluxioAlluxio, Inc.
 
GPGPU Accelerates PostgreSQL (English)
GPGPU Accelerates PostgreSQL (English)GPGPU Accelerates PostgreSQL (English)
GPGPU Accelerates PostgreSQL (English)Kohei KaiGai
 
PGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various cloudsPGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various cloudsPGConf APAC
 
Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...
Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...
Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...Amazon Web Services
 
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red_Hat_Storage
 
Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)
Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)
Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)Ontico
 
QCon2016--Drive Best Spark Performance on AI
QCon2016--Drive Best Spark Performance on AIQCon2016--Drive Best Spark Performance on AI
QCon2016--Drive Best Spark Performance on AILex Yu
 
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...Vietnam Open Infrastructure User Group
 
GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~
GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~
GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~Kohei KaiGai
 
Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...
Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...
Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...E-Commerce Brasil
 
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech Talks
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech TalksDeep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech Talks
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech TalksAmazon Web Services
 
Sqream DB on OpenPOWER performance
Sqream DB on OpenPOWER performanceSqream DB on OpenPOWER performance
Sqream DB on OpenPOWER performanceGanesan Narayanasamy
 
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed_Hat_Storage
 
Fast data in times of crisis with GPU accelerated database QikkDB | Business ...
Fast data in times of crisis with GPU accelerated database QikkDB | Business ...Fast data in times of crisis with GPU accelerated database QikkDB | Business ...
Fast data in times of crisis with GPU accelerated database QikkDB | Business ...Matej Misik
 
SQL+GPU+SSD=∞ (English)
SQL+GPU+SSD=∞ (English)SQL+GPU+SSD=∞ (English)
SQL+GPU+SSD=∞ (English)Kohei KaiGai
 
RAPIDS: GPU-Accelerated ETL and Feature Engineering
RAPIDS: GPU-Accelerated ETL and Feature EngineeringRAPIDS: GPU-Accelerated ETL and Feature Engineering
RAPIDS: GPU-Accelerated ETL and Feature EngineeringKeith Kraus
 

Semelhante a Full-throttle running on Terabytes log-data with PG-Strom (20)

20181210 - PGconf.ASIA Unconference
20181210 - PGconf.ASIA Unconference20181210 - PGconf.ASIA Unconference
20181210 - PGconf.ASIA Unconference
 
20170602_OSSummit_an_intelligent_storage
20170602_OSSummit_an_intelligent_storage20170602_OSSummit_an_intelligent_storage
20170602_OSSummit_an_intelligent_storage
 
Advancing GPU Analytics with RAPIDS Accelerator for Spark and Alluxio
Advancing GPU Analytics with RAPIDS Accelerator for Spark and AlluxioAdvancing GPU Analytics with RAPIDS Accelerator for Spark and Alluxio
Advancing GPU Analytics with RAPIDS Accelerator for Spark and Alluxio
 
GPGPU Accelerates PostgreSQL (English)
GPGPU Accelerates PostgreSQL (English)GPGPU Accelerates PostgreSQL (English)
GPGPU Accelerates PostgreSQL (English)
 
PGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various cloudsPGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various clouds
 
SQREAM DB on IBM Power9
SQREAM DB on IBM Power9SQREAM DB on IBM Power9
SQREAM DB on IBM Power9
 
Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...
Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...
Introducing Amazon EC2 P3 Instance - Featuring the Most Powerful GPU for Mach...
 
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
Red Hat Storage Day Atlanta - Designing Ceph Clusters Using Intel-Based Hardw...
 
Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)
Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)
Dataplane networking acceleration with OpenDataplane / Максим Уваров (Linaro)
 
RAPIDS Overview
RAPIDS OverviewRAPIDS Overview
RAPIDS Overview
 
QCon2016--Drive Best Spark Performance on AI
QCon2016--Drive Best Spark Performance on AIQCon2016--Drive Best Spark Performance on AI
QCon2016--Drive Best Spark Performance on AI
 
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
Packaging Strategy for Community Openstack and Implementation Reference | Hoj...
 
GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~
GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~
GPGPU Accelerates PostgreSQL ~Unlock the power of multi-thousand cores~
 
Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...
Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...
Fórum E-Commerce Brasil | Tecnologias NVIDIA aplicadas ao e-commerce. Muito a...
 
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech Talks
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech TalksDeep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech Talks
Deep Dive on Amazon EC2 Accelerated Computing - AWS Online Tech Talks
 
Sqream DB on OpenPOWER performance
Sqream DB on OpenPOWER performanceSqream DB on OpenPOWER performance
Sqream DB on OpenPOWER performance
 
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based HardwareRed hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
Red hat Storage Day LA - Designing Ceph Clusters Using Intel-Based Hardware
 
Fast data in times of crisis with GPU accelerated database QikkDB | Business ...
Fast data in times of crisis with GPU accelerated database QikkDB | Business ...Fast data in times of crisis with GPU accelerated database QikkDB | Business ...
Fast data in times of crisis with GPU accelerated database QikkDB | Business ...
 
SQL+GPU+SSD=∞ (English)
SQL+GPU+SSD=∞ (English)SQL+GPU+SSD=∞ (English)
SQL+GPU+SSD=∞ (English)
 
RAPIDS: GPU-Accelerated ETL and Feature Engineering
RAPIDS: GPU-Accelerated ETL and Feature EngineeringRAPIDS: GPU-Accelerated ETL and Feature Engineering
RAPIDS: GPU-Accelerated ETL and Feature Engineering
 

Mais de Kohei KaiGai

20221116_DBTS_PGStrom_History
20221116_DBTS_PGStrom_History20221116_DBTS_PGStrom_History
20221116_DBTS_PGStrom_HistoryKohei KaiGai
 
20221111_JPUG_CustomScan_API
20221111_JPUG_CustomScan_API20221111_JPUG_CustomScan_API
20221111_JPUG_CustomScan_APIKohei KaiGai
 
20211112_jpugcon_gpu_and_arrow
20211112_jpugcon_gpu_and_arrow20211112_jpugcon_gpu_and_arrow
20211112_jpugcon_gpu_and_arrowKohei KaiGai
 
20210928_pgunconf_hll_count
20210928_pgunconf_hll_count20210928_pgunconf_hll_count
20210928_pgunconf_hll_countKohei KaiGai
 
20210731_OSC_Kyoto_PGStrom3.0
20210731_OSC_Kyoto_PGStrom3.020210731_OSC_Kyoto_PGStrom3.0
20210731_OSC_Kyoto_PGStrom3.0Kohei KaiGai
 
20210511_PGStrom_GpuCache
20210511_PGStrom_GpuCache20210511_PGStrom_GpuCache
20210511_PGStrom_GpuCacheKohei KaiGai
 
20201113_PGconf_Japan_GPU_PostGIS
20201113_PGconf_Japan_GPU_PostGIS20201113_PGconf_Japan_GPU_PostGIS
20201113_PGconf_Japan_GPU_PostGISKohei KaiGai
 
20200828_OSCKyoto_Online
20200828_OSCKyoto_Online20200828_OSCKyoto_Online
20200828_OSCKyoto_OnlineKohei KaiGai
 
20200806_PGStrom_PostGIS_GstoreFdw
20200806_PGStrom_PostGIS_GstoreFdw20200806_PGStrom_PostGIS_GstoreFdw
20200806_PGStrom_PostGIS_GstoreFdwKohei KaiGai
 
20200424_Writable_Arrow_Fdw
20200424_Writable_Arrow_Fdw20200424_Writable_Arrow_Fdw
20200424_Writable_Arrow_FdwKohei KaiGai
 
20191211_Apache_Arrow_Meetup_Tokyo
20191211_Apache_Arrow_Meetup_Tokyo20191211_Apache_Arrow_Meetup_Tokyo
20191211_Apache_Arrow_Meetup_TokyoKohei KaiGai
 
20191115-PGconf.Japan
20191115-PGconf.Japan20191115-PGconf.Japan
20191115-PGconf.JapanKohei KaiGai
 
20190926_Try_RHEL8_NVMEoF_Beta
20190926_Try_RHEL8_NVMEoF_Beta20190926_Try_RHEL8_NVMEoF_Beta
20190926_Try_RHEL8_NVMEoF_BetaKohei KaiGai
 
20190925_DBTS_PGStrom
20190925_DBTS_PGStrom20190925_DBTS_PGStrom
20190925_DBTS_PGStromKohei KaiGai
 
20190516_DLC10_PGStrom
20190516_DLC10_PGStrom20190516_DLC10_PGStrom
20190516_DLC10_PGStromKohei KaiGai
 
20190418_PGStrom_on_ArrowFdw
20190418_PGStrom_on_ArrowFdw20190418_PGStrom_on_ArrowFdw
20190418_PGStrom_on_ArrowFdwKohei KaiGai
 
20190314 PGStrom Arrow_Fdw
20190314 PGStrom Arrow_Fdw20190314 PGStrom Arrow_Fdw
20190314 PGStrom Arrow_FdwKohei KaiGai
 
20181212 - PGconf.ASIA - LT
20181212 - PGconf.ASIA - LT20181212 - PGconf.ASIA - LT
20181212 - PGconf.ASIA - LTKohei KaiGai
 
20181211 - PGconf.ASIA - NVMESSD&GPU for BigData
20181211 - PGconf.ASIA - NVMESSD&GPU for BigData20181211 - PGconf.ASIA - NVMESSD&GPU for BigData
20181211 - PGconf.ASIA - NVMESSD&GPU for BigDataKohei KaiGai
 
20180920_DBTS_PGStrom_JP
20180920_DBTS_PGStrom_JP20180920_DBTS_PGStrom_JP
20180920_DBTS_PGStrom_JPKohei KaiGai
 

Mais de Kohei KaiGai (20)

20221116_DBTS_PGStrom_History
20221116_DBTS_PGStrom_History20221116_DBTS_PGStrom_History
20221116_DBTS_PGStrom_History
 
20221111_JPUG_CustomScan_API
20221111_JPUG_CustomScan_API20221111_JPUG_CustomScan_API
20221111_JPUG_CustomScan_API
 
20211112_jpugcon_gpu_and_arrow
20211112_jpugcon_gpu_and_arrow20211112_jpugcon_gpu_and_arrow
20211112_jpugcon_gpu_and_arrow
 
20210928_pgunconf_hll_count
20210928_pgunconf_hll_count20210928_pgunconf_hll_count
20210928_pgunconf_hll_count
 
20210731_OSC_Kyoto_PGStrom3.0
20210731_OSC_Kyoto_PGStrom3.020210731_OSC_Kyoto_PGStrom3.0
20210731_OSC_Kyoto_PGStrom3.0
 
20210511_PGStrom_GpuCache
20210511_PGStrom_GpuCache20210511_PGStrom_GpuCache
20210511_PGStrom_GpuCache
 
20201113_PGconf_Japan_GPU_PostGIS
20201113_PGconf_Japan_GPU_PostGIS20201113_PGconf_Japan_GPU_PostGIS
20201113_PGconf_Japan_GPU_PostGIS
 
20200828_OSCKyoto_Online
20200828_OSCKyoto_Online20200828_OSCKyoto_Online
20200828_OSCKyoto_Online
 
20200806_PGStrom_PostGIS_GstoreFdw
20200806_PGStrom_PostGIS_GstoreFdw20200806_PGStrom_PostGIS_GstoreFdw
20200806_PGStrom_PostGIS_GstoreFdw
 
20200424_Writable_Arrow_Fdw
20200424_Writable_Arrow_Fdw20200424_Writable_Arrow_Fdw
20200424_Writable_Arrow_Fdw
 
20191211_Apache_Arrow_Meetup_Tokyo
20191211_Apache_Arrow_Meetup_Tokyo20191211_Apache_Arrow_Meetup_Tokyo
20191211_Apache_Arrow_Meetup_Tokyo
 
20191115-PGconf.Japan
20191115-PGconf.Japan20191115-PGconf.Japan
20191115-PGconf.Japan
 
20190926_Try_RHEL8_NVMEoF_Beta
20190926_Try_RHEL8_NVMEoF_Beta20190926_Try_RHEL8_NVMEoF_Beta
20190926_Try_RHEL8_NVMEoF_Beta
 
20190925_DBTS_PGStrom
20190925_DBTS_PGStrom20190925_DBTS_PGStrom
20190925_DBTS_PGStrom
 
20190516_DLC10_PGStrom
20190516_DLC10_PGStrom20190516_DLC10_PGStrom
20190516_DLC10_PGStrom
 
20190418_PGStrom_on_ArrowFdw
20190418_PGStrom_on_ArrowFdw20190418_PGStrom_on_ArrowFdw
20190418_PGStrom_on_ArrowFdw
 
20190314 PGStrom Arrow_Fdw
20190314 PGStrom Arrow_Fdw20190314 PGStrom Arrow_Fdw
20190314 PGStrom Arrow_Fdw
 
20181212 - PGconf.ASIA - LT
20181212 - PGconf.ASIA - LT20181212 - PGconf.ASIA - LT
20181212 - PGconf.ASIA - LT
 
20181211 - PGconf.ASIA - NVMESSD&GPU for BigData
20181211 - PGconf.ASIA - NVMESSD&GPU for BigData20181211 - PGconf.ASIA - NVMESSD&GPU for BigData
20181211 - PGconf.ASIA - NVMESSD&GPU for BigData
 
20180920_DBTS_PGStrom_JP
20180920_DBTS_PGStrom_JP20180920_DBTS_PGStrom_JP
20180920_DBTS_PGStrom_JP
 

Último

How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerThousandEyes
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceanilsa9823
 
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AISyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AIABDERRAOUF MEHENNI
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsJhone kinadey
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Modelsaagamshah0812
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfkalichargn70th171
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...MyIntelliSource, Inc.
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...ICS
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...harshavardhanraghave
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxbodapatigopi8531
 
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️anilsa9823
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...kellynguyen01
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providermohitmore19
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comFatema Valibhai
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️Delhi Call girls
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Steffen Staab
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsAndolasoft Inc
 

Último (20)

How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
 
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female serviceCALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
CALL ON ➥8923113531 🔝Call Girls Badshah Nagar Lucknow best Female service
 
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AISyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
SyndBuddy AI 2k Review 2024: Revolutionizing Content Syndication with AI
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial Goals
 
Unlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language ModelsUnlocking the Future of AI Agents with Large Language Models
Unlocking the Future of AI Agents with Large Language Models
 
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdfThe Ultimate Test Automation Guide_ Best Practices and Tips.pdf
The Ultimate Test Automation Guide_ Best Practices and Tips.pdf
 
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS LiveVip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
Vip Call Girls Noida ➡️ Delhi ➡️ 9999965857 No Advance 24HRS Live
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
Try MyIntelliAccount Cloud Accounting Software As A Service Solution Risk Fre...
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
 
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
Reassessing the Bedrock of Clinical Function Models: An Examination of Large ...
 
Hand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptxHand gesture recognition PROJECT PPT.pptx
Hand gesture recognition PROJECT PPT.pptx
 
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online  ☂️
CALL ON ➥8923113531 🔝Call Girls Kakori Lucknow best sexual service Online ☂️
 
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
Short Story: Unveiling the Reasoning Abilities of Large Language Models by Ke...
 
TECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service providerTECUNIQUE: Success Stories: IT Service provider
TECUNIQUE: Success Stories: IT Service provider
 
HR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.comHR Software Buyers Guide in 2024 - HRSoftware.com
HR Software Buyers Guide in 2024 - HRSoftware.com
 
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
call girls in Vaishali (Ghaziabad) 🔝 >༒8448380779 🔝 genuine Escort Service 🔝✔️✔️
 
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
Shapes for Sharing between Graph Data Spaces - and Epistemic Querying of RDF-...
 
How To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.jsHow To Use Server-Side Rendering with Nuxt.js
How To Use Server-Side Rendering with Nuxt.js
 
Microsoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdfMicrosoft AI Transformation Partner Playbook.pdf
Microsoft AI Transformation Partner Playbook.pdf
 

Full-throttle running on Terabytes log-data with PG-Strom

  • 1. Full-throttle running on Terabytes log-data HeteroDB,Inc Chief Architect & CEO KaiGai Kohei <kaigai@heterodb.com>
  • 2.
  • 3. Self-Introduction  KaiGai Kohei (海外浩平)  Chief Architect & CEO of HeteroDB  Contributor of PostgreSQL (2006-)  Primary Developer of PG-Strom (2012-)  Interested in: Big-data, GPU, NVME/PMEM, ... about myself about our company  Established: 4th-Jul-2017  Location: Shinagawa, Tokyo, Japan  Businesses: ✓ Sales & development of high-performance data-processing software on top of heterogeneous architecture. ✓ Technology consulting service on GPU&DB area. PG-Strom PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data3
  • 4. What is PG-Strom? PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data4  Transparent GPU acceleration for analytics and reporting workloads  Binary code generation by JIT from SQL statements  PCIe-bus level optimization by SSD-to-GPU Direct SQL technology  Columnar-storage for efficient I/O and vector processors PG-Strom is an extension of PostgreSQL for terabytes scale data-processing and inter-operation of AI/ML, by utilization of GPU and NVME-SSD. App GPU off-loading for IoT/Big-Data for ML/Analytics ➢ SSD-to-GPU Direct SQL ➢ Columnar Store (Arrow_Fdw) ➢ Asymmetric Partition-wise JOIN/GROUP BY ➢ BRIN-Index Support ➢ NVME-over-Fabric Support ➢ Procedural Language for GPU native code (PL/CUDA) ➢ NVIDIA RAPIDS data frame support (WIP) ➢ IPC on GPU device memory
  • 5. PG-Strom as an open source project PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data5 Official documentation in English / Japanese http://heterodb.github.io/pg-strom/ Many stars ☺ Distributed under GPL-v2.0 Has been developed since 2012 ▌Supported PostgreSQL versions  PostgreSQL v11.x, v10.x and v9.6.x ▌Related contributions to PostgreSQL  Writable FDW support (9.3)  Custom-Scan interface (9.5)  FDW and Custom-Scan JOIN pushdown (9.5) ▌Talks at PostgreSQL community  PGconf.EU 2012 (Prague), 2015 (Vienna), 2018 (Lisbon)  PGconf.ASIA 2018 (Tokyo), 2019 (Bali, Indonesia)  PGconf.SV 2016 (San Francisco)  PGconf.China 2015 (Beijing)
  • 6. Terabytes scale log-data processing on PostgreSQL PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data6
  • 7. System image of log-data platform PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data7 Manufacturing Logistics Mobile Home electronics  Any kind of device generates various forms of log-data.  People want to find insights in the data.  Data importing and processing must be rapid.  System administration should be simple and easy. JBoF: Just Bunch of Flash NVME over Fabric (RDMA) DB Admin BI Tools Machine-learning applications, e.g., anomaly detection shared data-frame PG-Strom
  • 8. Why the elephant is cobalt blue, not yellow ▌This is PGconf.ASIA 2019 :-) ▌A standalone system is much simpler than a multi-node cluster system. ▌Engineers have been familiar with PostgreSQL for more than 10 years. ▌How many users actually have petabytes-scale data? PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data8
  • 9. How much data size do we expect? (1/3) ▌This is a factory of an international top-brand company. ▌Most users don’t have more data than this. Case: A semiconductor factory managed up to 100TB with PostgreSQL Total 20TB Per Year Kept for 5 years Defect Investigation PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data9
  • 10. How much data size do we expect? (2/3) Nowadays, a 2U server is capable of 100TB capacity.
    model: Supermicro 2029U-TN24R4T
    CPU:   Intel Xeon Gold 6226 (12C, 2.7GHz) x2
    RAM:   32GB RDIMM (DDR4-2933, ECC) x12
    GPU:   NVIDIA Tesla P40 (3840C, 24GB) x2
    HDD:   Seagate 1.0TB SATA (7.2krpm) x1
    NVME:  Intel DC P4510 (8.0TB, U.2) x24
    N/W:   built-in 10GBase-T x4
    8.0TB x 24 = 192TB. How much is it? 60,932 USD at thinkmate.com (31-Aug-2019). PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data10
  • 11. How much data size do we expect? (3/3) On the other hand, it is not a small amount of data, either. PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data11
  • 12. Weapons to tackle terabytes-scale data • SSD-to-GPU Direct SQL • Direct data transfer pulls out the maximum performance of NVME for SQL workloads Efficient Storage • Arrow_Fdw • A columnar store that is optimal for both I/O throughput and vector processors Efficient Data Structure • PostgreSQL Partitioning & Asymmetric Partition-wise JOIN • Prune ranges that are obviously unreferenced Partitioning PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data12
  • 13. PG-Strom: SSD-to-GPU Direct SQL Efficient Storage PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data13
  • 14. GPU’s characteristics - mostly as a computing accelerator PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data14 Over 10 years of history in HPC, then massive popularization in machine-learning NVIDIA Tesla V100 Super Computer (TITEC; TSUBAME3.0) Computer Graphics Machine-Learning Simulation Today’s topic: how are I/O workloads accelerated by the GPU, a computing accelerator?
  • 15. A usual composition of an x86_64 server: GPU, SSD, CPU, RAM, HDD, N/W PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data15
  • 16. Data flow to process a massive amount of data CPU RAM SSD GPU PCIe PostgreSQL Data Blocks Normal Data Flow All the records, including junk, must be loaded onto RAM once, because software cannot check the necessity of the rows prior to data loading. So, the amount of I/O traffic over the PCIe bus tends to be large. PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data16 Unless records are loaded to CPU/RAM once over the PCIe bus, software cannot check their necessity, even if they are “junk” records.
  • 17. SSD-to-GPU Direct SQL (1/4) – Overview CPU RAM SSD GPU PCIe PostgreSQL Data Blocks NVIDIA GPUDirect RDMA It allows loading the data blocks on NVME-SSD to the GPU using peer-to-peer DMA over the PCIe bus, bypassing CPU/RAM. WHERE-clause JOIN GROUP BY Run SQL by GPU to reduce the data size Data Size: Small PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data17 v2.0
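For illustration, a minimal sketch of driving this feature from SQL; the GUC names follow PG-Strom v2.x conventions and should be treated as assumptions here, not as a definitive configuration:

    SET pg_strom.enabled = on;               -- enable GPU off-loading as a whole
    SET pg_strom.nvme_strom_enabled = on;    -- permit SSD-to-GPU Direct SQL (assumed GUC name)
    EXPLAIN (COSTS OFF)
    SELECT count(*) FROM lineorder WHERE lo_discount BETWEEN 1 AND 3;
    -- A GpuScan/GpuPreAgg node should appear in the plan; direct mode is
    -- chosen automatically when the scanned relation is large enough.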
  • 18. Background - GPUDirect RDMA by NVIDIA Physical address space PCIe BAR1 Area GPU device memory RAM NVMe-SSD Infiniband HBA PCIe devices GPUDirect RDMA It enables mapping GPU device memory onto the physical address space of the host system. Once the “physical address of GPU device memory” appears, we can use it as the source or destination address of DMA with PCIe devices. PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data18 0xf0000000 0xe0000000 DMA Request SRC: 1200th sector LEN: 40 sectors DST: 0xe0200000
  • 19. SSD-to-GPU Direct SQL (2/4) – System configuration and workloads PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data19 Summarizing queries for a typical star-schema structure on a simple 1U server.
    Supermicro SYS-1019GP-TT:
      CPU Xeon Gold 6126T (2.6GHz, 12C) x1, RAM 192GB (32GB DDR4-2666 x6),
      GPU NVIDIA Tesla V100 (5120C, 16GB) x1, SSD Intel DC P4600 (HHHL; 2.0TB) x3 (striping configuration by md-raid0),
      HDD 2.0TB (SATA; 7.2krpm) x6, Network 10Gb Ethernet 2 ports,
      OS Red Hat Enterprise Linux 7.6, CUDA 10.1 + NVIDIA Driver 418.40.04,
      DB PostgreSQL v11.2 + PG-Strom v2.2devel
    ■ Query Example (Q2_3)
    SELECT sum(lo_revenue), d_year, p_brand1
      FROM lineorder, date1, part, supplier
     WHERE lo_orderdate = d_datekey
       AND lo_partkey = p_partkey
       AND lo_suppkey = s_suppkey
       AND p_category = 'MFGR#12'
       AND s_region = 'AMERICA'
     GROUP BY d_year, p_brand1
     ORDER BY d_year, p_brand1;
    customer 12M rows (1.6GB), date1 2.5K rows (400KB), part 1.8M rows (206MB), supplier 4.0M rows (528MB), lineorder 2.4B rows (351GB)
  • 20. SSD-to-GPU Direct SQL (3/4) – Benchmark results PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data20  Query Execution Throughput = (353GB; DB-size) / (Query response time [sec])  SSD-to-GPU Direct SQL runs the workloads close to the hardware limitation (8.5GB/s)  about x3 faster than the filesystem- and CPU-based mechanism. [Chart: Star Schema Benchmark Q1_1..Q4_3 on 1x NVIDIA Tesla V100 + 3x Intel DC P4600; PostgreSQL 11.2: 2168-2344 MB/s per query; PG-Strom 2.2devel: 7258-7925 MB/s per query]
  • 21. SSD-to-GPU Direct SQL (4/4) – Software Stack PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data21 Filesystem (ext4, xfs) nvme_strom kernel module NVMe SSD PostgreSQL pg_strom extension read(2) ioctl(2) Operating System Software Layer Database Software Layer blk-mq nvme pcie nvme rdma Network HBA NVMe Request Network HBA NVMe SSD NVME-over-Fabric Target RDMA over Converged Ethernet GPUDirect RDMA Translation from logical location to physical location on the drive ■ Other software ■ Software by HeteroDB ■ Hardware Special NVME READ command that loads the source blocks on NVME-SSD to the GPU’s device memory. External Storage Server Local Storage Layer
  • 22. PG-Strom: Arrow_Fdw (Columnar Store) Efficient Data Structure PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data22
  • 23. Background: Apache Arrow (1/2) ▌Characteristics  Column-oriented data format designed for analytics workloads  A common data format for inter-application exchange  Various primitive data types like integer, floating-point, date/time and so on PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data23 PostgreSQL / PG-Strom NVIDIA GPU
  • 24. Background: Apache Arrow (2/2) Most data types are convertible between Apache Arrow and PostgreSQL. PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data24
    Apache Arrow Data Types | PostgreSQL Data Types  | extra description
    Int                     | int2, int4, int8       |
    FloatingPoint           | float2, float4, float8 | float2 is an enhancement of PG-Strom
    Binary                  | bytea                  |
    Utf8                    | text                   |
    Bool                    | bool                   |
    Decimal                 | numeric                |
    Date                    | date                   | adjusted to unitsz = Day
    Time                    | time                   | adjusted to unitsz = MicroSecond
    Timestamp               | timestamp              | adjusted to unitsz = MicroSecond
    Interval                | interval               |
    List                    | array types            | only 1-dimensional arrays are supportable
    Struct                  | composite types        |
    Union                   | ------                 |
    FixedSizeBinary         | char(n)                |
    FixedSizeList           | ------                 |
    Map                     | ------                 |
  • 25. Log-data characteristics (1/2) – WHERE is it generated? ETL OLTP OLAP Traditional OLTP&OLAP – data is generated inside the database system Data Creation IoT/M2M use case – data is generated outside the database system Log processing BI Tools BI Tools Gateway Server Data Creation Data Creation Many Devices PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data25 Data importing becomes a heavy, time-consuming operation for big-data processing. Data Import Import!
  • 26. Background – FDW (Foreign Data Wrapper) PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data26  The FDW module is responsible for transforming external data into PostgreSQL’s internal data format.  In the case of Arrow_Fdw, it maps Arrow files on the filesystem as a foreign table. ➔ Just mapping, so there is no need to import the external data again. Foreign Table – it allows reading (and potentially writing) an external data source like normal PostgreSQL tables, from SQL commands. PostgreSQL Table Foreign Table postgres_fdw Foreign Table file_fdw Foreign Table twitter_fdw Foreign Table Arrow_fdw External RDBMS CSV Files Twitter (Web API) Arrow Files
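As a sketch, mapping an Arrow file with Arrow_Fdw looks like the following; the file path and column list are illustrative assumptions, not the benchmark schema:

    CREATE FOREIGN TABLE flogdata (
        ts        timestamp,
        device_id int,
        signal    float8
    ) SERVER arrow_fdw
      OPTIONS (file '/opt/nvme/flogdata.arrow');

    -- alternatively, let the Arrow file's embedded schema define the columns
    IMPORT FOREIGN SCHEMA flogdata
      FROM SERVER arrow_fdw INTO public
      OPTIONS (file '/opt/nvme/flogdata.arrow');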
  • 27. SSD-to-GPU Direct SQL on Arrow_Fdw (1/3) PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data27 ▌Why Apache Arrow is beneficial?  Less amount of I/O to be loaded; only the referenced columns  Higher utilization of GPU cores; by vector processing and the wide memory bus  Read-only structure; no MVCC checks are required at run-time It transfers ONLY Referenced Columns over the SSD-to-GPU Direct SQL mechanism PCIe Bus NVMe SSD GPU SSD-to-GPU P2P DMA WHERE-clause JOIN GROUP BY P2P data transfer of only the referenced columns GPU code supports Apache Arrow format as data source. Runs SQL workloads in parallel by thousands of cores. Writes back the small results built as heap-tuples of PostgreSQL. Results metadata
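Because only the referenced columns are transferred, a summarizing query touches just a fraction of the file; a sketch against the flineorder mapping shown on slide 29:

    SELECT sum(lo_supplycost)
      FROM flineorder
     WHERE lo_orderdate BETWEEN 19960101 AND 19961231;
    -- only lo_supplycost and lo_orderdate are read from NVME-SSD;
    -- the other 15 columns of the Arrow file never cross the PCIe bus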
  • 28. SSD-to-GPU Direct SQL on Arrow_Fdw (2/3) – Benchmark results  Query Execution Throughput = (353GB; DB or 310GB; arrow) / (Query response time [s])  Combined use of SSD-to-GPU Direct SQL and the columnar store pulled out 15GB-49GB/s query execution throughput, according to the number of referenced columns.  See p.10 for the server configuration; just a 1U rack-server with 1 CPU + 1 GPU + 3 SSDs. [Chart: Star Schema Benchmark Q1_1..Q4_3 on 1x NVIDIA Tesla V100 + 3x Intel DC P4600; PostgreSQL 11.2: 2168-2344 MB/s; PG-Strom 2.2devel: 7258-7925 MB/s; PG-Strom 2.2devel + Arrow_Fdw: 15919-49038 MB/s] PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data28
  • 29. SSD-to-GPU Direct SQL on Arrow_Fdw (3/3) – Validation of the results
    Foreign table "public.flineorder"
         Column         |     Type      |  Size
    --------------------+---------------+---------
     lo_orderkey        | numeric       | 35.86GB
     lo_linenumber      | integer       |  8.96GB
     lo_custkey         | numeric       | 35.86GB
     lo_partkey         | integer       |  8.96GB
     lo_suppkey         | numeric       | 35.86GB  <-- ★Referenced by Q2_1
     lo_orderdate       | integer       |  8.96GB  <-- ★Referenced by Q2_1
     lo_orderpriority   | character(15) | 33.61GB  <-- ★Referenced by Q2_1
     lo_shippriority    | character(1)  |  2.23GB
     lo_quantity        | integer       |  8.96GB
     lo_extendedprice   | bigint        | 17.93GB
     lo_ordertotalprice | bigint        | 17.93GB
     lo_discount        | integer       |  8.96GB
     lo_revenue         | bigint        | 17.93GB
     lo_supplycost      | bigint        | 17.93GB  <-- ★Referenced by Q2_1
     lo_tax             | integer       |  8.96GB
     lo_commit_date     | character(8)  | 17.93GB
     lo_shipmode        | character(10) | 22.41GB
    FDW options: (file '/opt/nvme/lineorder_s401.arrow') ... file size = 310GB
    ▌Only 96.4GB of 310GB was actually read from NVME-SSD (31.08%)
    ▌Execution time of Q2_1 is 11.06s, so 96.4GB / 11.06s = 8.7GB/s ➔ a reasonable performance for 3x Intel DC P4600 in a single-CPU configuration
    Almost equivalent to raw data transfer, but with no need to copy the unreferenced columns. PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data29
  • 30. Generation of Apache Arrow files ✓ Simply specify the SQL to run with “-c” and the Arrow file to be written with the “-o” option. The Pg2Arrow command saves SQL results in Arrow format. Apache Arrow Data Files Arrow_Fdw Pg2Arrow PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data30
    $ ./pg2arrow -h
    Usage: pg2arrow [OPTION]... [DBNAME [USERNAME]]
    General options:
      -d, --dbname=DBNAME      database name to connect to
      -c, --command=COMMAND    SQL command to run
      -f, --file=FILENAME      SQL command from file
      -o, --output=FILENAME    result file in Apache Arrow format
    Arrow format options:
      -s, --segment-size=SIZE  size of record batch for each (default is 256MB)
    Connection options:
      -h, --host=HOSTNAME      database server host
      -p, --port=PORT          database server port
      -U, --username=USERNAME  database user name
      -w, --no-password        never prompt for password
      -W, --password           force password prompt
    Debug options:
      --dump=FILENAME          dump information of arrow file
      --progress               shows progress of the job
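A typical invocation, under illustrative database and table names, dumps a query result straight into an Arrow file that Arrow_Fdw can map afterwards:

    $ ./pg2arrow -d logdb \
          -c "SELECT * FROM logdata WHERE ts < '2019-03-01'" \
          -o /opt/nvme/logdata_201902.arrow --progress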
  • 31. PostgreSQL Partitioning and PCIe-bus level optimization Partitioning PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data31
  • 32. Log-data characteristics (2/2) – INSERT-only ✓ MVCC visibility check is (relatively) not significant. ✓ Rows with an old timestamp will never be inserted. INSERT UPDATE DELETE INSERT UPDATE DELETE Transactional Data Log Data with timestamp current 2018 2017 PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data32
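Since rows arrive in (nearly) timestamp order and old timestamps are never inserted, a tiny BRIN index on the timestamp column of the writable partition gives cheap range pruning; a minimal sketch, with an illustrative table and index name:

    CREATE INDEX logdata_current_ts_brin
        ON logdata_current USING brin (ts);
    -- block-range min/max summaries stay tight because the heap
    -- grows in timestamp order and is never updated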
  • 33. Configuration of PostgreSQL partition for log-data (1/2) ▌Mixture of PostgreSQL tables and Arrow foreign tables in one partition declaration ▌Log-data should have a timestamp, and is never updated ➔ Old data can be moved to an Arrow foreign table for more efficient I/O logdata_201812 logdata_201901 logdata_201902 logdata_current logdata table (PARTITION BY timestamp) 2019-03-21 12:34:56 dev_id: 2345 signal: 12.4 Log-data with timestamp PostgreSQL tables Row data store Read-writable but slow Arrow foreign table Column data store Read-only but fast Unrelated child tables are skipped if the query contains a WHERE-clause on the timestamp, e.g., WHERE timestamp > ‘2019-01-23’ AND device_id = 456 PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data33
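A minimal sketch of such a declaration, assuming range partitioning by timestamp; the table names and file paths are illustrative (foreign tables are usable as partition leafs in PostgreSQL v11):

    CREATE TABLE logdata (
        ts        timestamp NOT NULL,
        device_id int,
        signal    float8
    ) PARTITION BY RANGE (ts);

    -- the current month: a normal, writable PostgreSQL heap table
    CREATE TABLE logdata_current PARTITION OF logdata
        FOR VALUES FROM ('2019-03-01') TO ('2019-04-01');

    -- older months: read-only but fast Arrow foreign tables
    CREATE FOREIGN TABLE logdata_201902 PARTITION OF logdata
        FOR VALUES FROM ('2019-02-01') TO ('2019-03-01')
        SERVER arrow_fdw OPTIONS (file '/opt/nvme/logdata_201902.arrow');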
  • 34. Configuration of PostgreSQL partition for log-data (2/2) logdata_201812 logdata_201901 logdata_201902 logdata_current logdata table (PARTITION BY timestamp) 2019-03-21 12:34:56 dev_id: 2345 signal: 12.4 logdata_201812 logdata_201901 logdata_201902 logdata_201903 logdata_current logdata table (PARTITION BY timestamp) a month later Extract the data of 2019-03 PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data34 ▌Mixture of PostgreSQL tables and Arrow foreign tables in one partition declaration ▌Log-data should have a timestamp, and is never updated ➔ Old data can be moved to an Arrow foreign table for more efficient I/O Log-data with timestamp
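The monthly rotation then reduces to freezing the closed month with Pg2Arrow and swapping partition leafs; a sketch under the same illustrative names:

    -- 1) freeze the closed month into an Arrow file, e.g. from the shell:
    --      ./pg2arrow -d logdb -c "SELECT * FROM logdata_current" \
    --                 -o /opt/nvme/logdata_201903.arrow
    -- 2) swap the partition leafs inside one transaction
    BEGIN;
    ALTER TABLE logdata DETACH PARTITION logdata_current;
    ALTER TABLE logdata_current RENAME TO logdata_row_201903;
    CREATE FOREIGN TABLE logdata_201903 PARTITION OF logdata
        FOR VALUES FROM ('2019-03-01') TO ('2019-04-01')
        SERVER arrow_fdw OPTIONS (file '/opt/nvme/logdata_201903.arrow');
    -- 3) a fresh writable leaf for the new month
    CREATE TABLE logdata_current PARTITION OF logdata
        FOR VALUES FROM ('2019-04-01') TO ('2019-05-01');
    COMMIT;
    -- DROP TABLE logdata_row_201903;  -- once the Arrow copy is verified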
  • 35. PCIe-bus level optimization (1/3) PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data35 Physical data distribution, and utilization of closest GPU CPU CPU PCIe switch SSD GPU PCIe switch SSD GPU PCIe switch SSD GPU PCIe switch SSD GPU logdata 201812 logdata 201901 logdata 201902 logdata 201903 Near is Better Far is Worse
  • 36. PCIe-bus level optimization (2/3) PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data36 By P2P DMA over the PCIe switch, the major data traffic bypasses the CPU. CPU CPU PCIe switch SSD GPU PCIe switch SSD GPU PCIe switch SSD GPU PCIe switch SSD GPU SCAN SCAN SCAN SCAN JOIN JOIN JOIN JOIN GROUP BY GROUP BY GROUP BY GROUP BY Pre-processed Data (very small) GATHER GATHER
  • 37. PCIe-bus level optimization (3/3) PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data37 Supermicro SYS-4029TRT2 x96 lane PCIe switch x96 lane PCIe switch CPU2 CPU1 QPI Gen3 x16 Gen3 x16 for each slot Gen3 x16 for each slotGen3 x16 ▌HPC Server – optimization for GPUDirect RDMA ▌I/O Expansion Box NEC ExpEther 40G (4slots edition) Network Switch 4 slots of PCIe Gen3 x8 PCIe Swich 40Gb Ethernet CPU NIC Extra I/O Boxes
  • 38. Special Optimization – Combined GPU Kernel PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data38 Reduction of “data ping-pong” is a key to performance GpuScan kernel GpuJoin kernel GpuPreAgg kernel GPU CPU Storage GPU Buffer GPU Buffer results DMA Buffer Agg (PostgreSQL) Combined GPU kernel for SCAN + JOIN + GROUP BY data size = Large data size = Small DMA Buffer SSD-to-GPU Direct SQL DMA Buffer No data transfer ping-pong
  • 39. Asymmetric Partition-wise JOIN (1/5) PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data39 lineorder lineorder_p0 lineorder_p1 lineorder_p2 remainder=0 remainder=1 remainder=2 customer date supplier parts tablespace: nvme0 tablespace: nvme1 tablespace: nvme2 Records from the partition leafs must be brought back to the CPU and processed there first! Scan Scan Scan Append Join Agg Query Results Scan Massive numbers of records must be processed on the CPU
  • 40. Asymmetric Partition-wise JOIN (2/5) PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data40 lineorder lineorder_p0 lineorder_p1 lineorder_p2 remainder=0 remainder=1 remainder=2 customer date supplier parts Push down JOIN/GROUP BY, even if the smaller half is not a partitioned table. Join Append Agg Query Results Scan Scan PreAgg Join Scan PreAgg Join Scan PreAgg tablespace: nvme0 tablespace: nvme1 tablespace: nvme2 Only a very small number of partial results of JOIN/GROUP BY need to be gathered on the CPU. ※ This feature is now proposed for the PostgreSQL 13 commitfest.
  • 41. Asymmetric Partition-wise JOIN (3/5)
    postgres=# explain select * from ptable p, t1 where p.a = t1.aid;
                                        QUERY PLAN
    ----------------------------------------------------------------------------------
     Hash Join  (cost=2.12..24658.62 rows=49950 width=49)
       Hash Cond: (p.a = t1.aid)
       ->  Append  (cost=0.00..20407.00 rows=1000000 width=12)
             ->  Seq Scan on ptable_p0 p  (cost=0.00..5134.63 rows=333263 width=12)
             ->  Seq Scan on ptable_p1 p_1  (cost=0.00..5137.97 rows=333497 width=12)
             ->  Seq Scan on ptable_p2 p_2  (cost=0.00..5134.40 rows=333240 width=12)
       ->  Hash  (cost=1.50..1.50 rows=50 width=37)
             ->  Seq Scan on t1  (cost=0.00..1.50 rows=50 width=37)
    (8 rows)
    PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data41
  • 42. Asymmetric Partition-wise JOIN (4/5)
    postgres=# set enable_partitionwise_join = on;
    SET
    postgres=# explain select * from ptable p, t1 where p.a = t1.aid;
                                        QUERY PLAN
    ----------------------------------------------------------------------------------
     Append  (cost=2.12..19912.62 rows=49950 width=49)
       ->  Hash Join  (cost=2.12..6552.96 rows=16647 width=49)
             Hash Cond: (p.a = t1.aid)
             ->  Seq Scan on ptable_p0 p  (cost=0.00..5134.63 rows=333263 width=12)
             ->  Hash  (cost=1.50..1.50 rows=50 width=37)
                   ->  Seq Scan on t1  (cost=0.00..1.50 rows=50 width=37)
       ->  Hash Join  (cost=2.12..6557.29 rows=16658 width=49)
             Hash Cond: (p_1.a = t1.aid)
             ->  Seq Scan on ptable_p1 p_1  (cost=0.00..5137.97 rows=333497 width=12)
             ->  Hash  (cost=1.50..1.50 rows=50 width=37)
                   ->  Seq Scan on t1  (cost=0.00..1.50 rows=50 width=37)
       ->  Hash Join  (cost=2.12..6552.62 rows=16645 width=49)
             Hash Cond: (p_2.a = t1.aid)
             ->  Seq Scan on ptable_p2 p_2  (cost=0.00..5134.40 rows=333240 width=12)
             ->  Hash  (cost=1.50..1.50 rows=50 width=37)
                   ->  Seq Scan on t1  (cost=0.00..1.50 rows=50 width=37)
    (16 rows)
    PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data42
  • 43. Asymmetric Partition-wise JOIN (5/5) May be available in PostgreSQL v13 PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data43
  • 44. (Near) Future works & Conclusion PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data44
  • 45. Benchmarks on HPC Server (1/2) CPU2 CPU1 SerialCables dual PCI-ENC8G-08A (U.2 NVME JBOF; 8slots) x2 NVIDIA Tesla V100 x4 45 PCIe switch PCIe switch PCIe switch PCIe switch Gen3 x16 for each slot Gen3 x16 for each slot via SFF-8644 based PCIe x4 cables (3.2GB/s x 16 = max 51.2GB/s) Intel SSD DC P4510 (1.0TB) x16 SerialCables PCI-AD-x16HE x4 WIP: the facilities are now being arranged; we plan to run the benchmark project in Sep/Oct 2019. Supermicro SYS-4029GP-TRT PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data
  • 46. Capable of processing up to 100TB-grade log-data on a single node. With 4 units in parallel execution: 100-120GB/s effective data-processing capability. With columnar storage: 25-30GB/s effective data transfer per unit. Benchmarks on HPC Server (2/2) QPI Towards an effective 100GB/s performance by PCIe-bus optimization and columnar storage CPU2 CPU1 RAM RAM PCIe-SW PCIe-SW NVME0 NVME1 NVME2 NVME3 NVME4 NVME5 NVME6 NVME7 NVME8 NVME9 NVME10 NVME11 NVME12 NVME13 NVME14 NVME15 GPU0 GPU1 GPU2 GPU3 HBA0 HBA1 HBA2 HBA3 8.0-10GB/s physical data transfer performance per (GPU + 4 SSD) unit 46 PCIe-SW PCIe-SW JBOF unit-0 JBOF unit-1 WIP PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data
  • 47. Future works ▌Asymmetric Partition-wise JOIN (PostgreSQL v13) ▌MVCC checks on the GPU device ▌Data-frame exchange over GPU device memory (NVIDIA RAPIDS) ▌RHEL8 support (Linux kernel driver) ▌DRAM access reductions on GpuJoin ▌More reliable statistics framework ▌Expansion of regression test cases ▌……and so on The PG-Strom development team welcomes your contributions! PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data47
  • 48. Conclusion PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data48 ▌Characteristics of log-data  INSERT-only  Must be imported once  Always has a timestamp ▌Weapons we can use  SSD-to-GPU Direct SQL  Arrow_Fdw  PostgreSQL Partitioning  PCIe-bus level optimization ▌Why PostgreSQL?  A standalone system is much simpler and less expensive than a cluster.  Engineers have been familiar with PostgreSQL for over 10 years. Full utilization of H/W, optimized data structures, and proper partitioning enable terabytes-scale data processing on a standalone PostgreSQL system.
  • 49. Resources PGconf.ASIA 2019 - Full-throttle running on Terabytes log-data49 ▌Repository  https://github.com/heterodb/pg-strom  https://heterodb.github.io/swdc/ ▌Documents  http://heterodb.github.io/pg-strom/ ▌Contact  kaigai@heterodb.com  Tw: @kkaigai