PGconf.ASIA Unconference Session
~Partition-wise Join & Visibility-Map~
HeteroDB,Inc
Chief Architect & CEO
KaiGai Kohei <kaigai@heterodb.com>
about HeteroDB
Discussion Materials for Foreign Partners
Corporate overview
 Name: HeteroDB,Inc
 Established: 4th-Jul-2017
 Location: Shinagawa, Tokyo, Japan
 Businesses: Sales of accelerated database products;
technical consulting in the GPU & DB area
By applying heterogeneous-computing technology to the database area,
we provide a useful, fast, and cost-effective data analytics platform
for everyone who needs the power of analytics.
CEO Profile
 KaiGai Kohei – He has contributed to PostgreSQL and Linux kernel
development in the OSS community for more than ten years, especially
to the security and database-federation features of PostgreSQL.
 Award of “Genius Programmer” by IPA MITOH program (2007)
 Top-5 poster finalist at GPU Technology Conference 2017.
Features of RDBMS
✓ High-availability / Clustering
✓ DB administration and backup
✓ Transaction control
✓ BI and visualization
➔ We can use the products that
support PostgreSQL as-is.
Core technology – PG-Strom
PG-Strom: An extension module for PostgreSQL that accelerates SQL
workloads using the thousands of cores and wide-band memory of GPUs.
GPU
Big-data Analytics
PG-Strom
NVME and GPU accelerates PostgreSQL beyond the limitation – PGconf.EU 2018
Rapid mass-data loading
from the storage device
Machine-learning & Statistics
SSD-to-GPU Direct SQL Execution
and Visibility-map
A typical composition of an x86_64 server
GPU SSD
CPU
RAM
HDD
N/W
Data flow to process a massive amount of data
CPU RAM
SSD GPU
PCIe
PostgreSQL
Data Blocks
Normal Data Flow
All records, including junk, must be loaded
into RAM once, because software cannot check
whether a row is needed prior to loading it.
As a result, the I/O traffic over the PCIe bus
tends to be large.
Unless records are loaded into CPU/RAM once, over the PCIe bus,
software cannot check whether they are needed, even if they are “junk”.
Core Feature: SSD-to-GPU Direct SQL
CPU RAM
SSD GPU
PCIe
PostgreSQL
Data Blocks
NVIDIA GPUDirect RDMA
It allows loading the data blocks on NVMe-SSD
into the GPU using peer-to-peer DMA over the PCIe bus,
bypassing CPU/RAM. WHERE-clause
JOIN
GROUP BY
Run SQL by GPU
to reduce the data size
Data Size: Small
v2.0
Element technology - NVIDIA GPUDirect RDMA
Physical
address space
PCIe BAR1 Area
GPU
device
memory
RAM
NVMe-SSD Infiniband
HBA
PCIe device
GPUDirect RDMA
It enables mapping GPU device
memory into the physical address
space of the host system.
Once the physical address of GPU device memory
is exposed, we can use it as the source or destination
address of DMA with other PCIe devices.
0xf0000000
0xe0000000
DMA Request
SRC: 1200th sector
LEN: 40 sectors
DST: 0xe0200000
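The DMA request above can be sketched as follows; a minimal illustration assuming 512-byte NVMe sectors (the function and field names are hypothetical, not the nvme_strom API):

```python
SECTOR_SIZE = 512  # NVMe sectors are commonly 512 bytes

def make_dma_request(src_sector, num_sectors, dst_bar_addr):
    """Build a descriptor for a peer-to-peer DMA: copy `num_sectors`
    starting at `src_sector` on the SSD into the GPU BAR1 window at
    `dst_bar_addr` (a physical address exposed via GPUDirect RDMA)."""
    return {
        "src_offset_bytes": src_sector * SECTOR_SIZE,
        "length_bytes": num_sectors * SECTOR_SIZE,
        "dst_phys_addr": dst_bar_addr,
    }

# the request from the slide: 40 sectors from sector 1200 to 0xe0200000
req = make_dma_request(1200, 40, 0xE0200000)
```

Nothing here touches host RAM: the source and destination are both physical addresses that the SSD's DMA engine can use directly.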
Benchmark Results – single-node version
[Chart: “Star Schema Benchmark on NVMe-SSD + md-raid0” – query processing throughput [MB/sec] for Q1_1 through Q4_3. PgSQL 9.6 (SSDx3) runs at roughly 1,900–2,170 MB/s, while PG-Strom 2.0 (SSDx3) runs at roughly 5,480–6,280 MB/s, close to the H/W spec of the 3x SSDs.]
SSD-to-GPU Direct SQL delivers performance close to the H/W spec
 Measured with the Star Schema Benchmark, a set of typical batch / reporting workloads.
 CPU: Intel Xeon E5-2650v4, RAM: 128GB, GPU: NVIDIA Tesla P40, SSD: Intel 750 (400GB; SeqRead 2.2GB/s) x3
 The dataset size is 353GB (sf: 401), to ensure an I/O-bound workload.
SSD-to-GPU Direct SQL - Software Stack
Filesystem
(ext4, xfs)
nvme_strom
kernel module
NVMe SSD
NVIDIA Tesla GPU
PostgreSQL
pg_strom
extension
read(2) ioctl(2)
Hardware
Layer
Operating
System
Software
Layer
Database
Software
Layer
blk-mq
nvme pcie nvme rdma
Network HBA
File-offset to NVME-SSD sector
number translation
NVMEoF Target
(JBOF)
NVMe
Request
■ Other software component
■ Our developed software
■ Hardware
NVME over Fabric
nvme_strom v2.0 supports
NVME-over-Fabric (RDMA).
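The file-offset to NVMe sector-number translation done by the nvme_strom kernel module can be sketched roughly like this; a simplified illustration assuming a byte-granular extent map (real filesystems expose this via interfaces such as the fiemap ioctl; all names here are illustrative):

```python
SECTOR_SIZE = 512

def file_offset_to_sector(extent_map, file_offset):
    """Translate a byte offset within a file into an NVMe sector number,
    given an extent map of (file_start, disk_start, length) tuples in
    bytes. A toy stand-in for the kernel module's translation step."""
    for file_start, disk_start, length in extent_map:
        if file_start <= file_offset < file_start + length:
            disk_byte = disk_start + (file_offset - file_start)
            return disk_byte // SECTOR_SIZE
    raise ValueError("offset not mapped to any extent")

# one 1MB extent: file bytes [0, 1MB) stored starting at disk byte 614400
extents = [(0, 614400, 1 << 20)]
sector = file_offset_to_sector(extents, 8192)  # block at file offset 8KB
```

The real translation must also handle files fragmented across many extents, which is why the module sits between the filesystem and the NVMe driver.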
[Diagram: a file of blocks BLK-100 … BLK-109 whose cached state is initially unknown]
Consistency with disk buffers of PostgreSQL / Linux kernel
[Diagram: each BLCKSZ (=8KB) block is classified as “uncached”, “cached by PG”, or “cached by OS”. Cached blocks are copied into GPU device memory through a userspace DMA buffer in RAM via the CUDA API (cuMemcpyHtoDAsync); uncached blocks are transferred by SSD-to-GPU P2P DMA. The transfer size per request is BLCKSZ × NChunks.]
① PG-Strom checks PostgreSQL’s shared buffers for the blocks to read
✓ Whether the block is already cached in PostgreSQL’s shared buffers.
✓ Whether the block may not be all-visible, using the visibility map.
② NVME-Strom checks the OS page cache for the blocks to read
✓ Whether the block is already cached in the Linux kernel’s page cache.
✓ But in fast-SSD mode, the page cache is not copied unless it is dirty.
③ Then it kicks SSD-to-GPU P2P DMA for the remaining uncached blocks
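Steps ①–③ above can be sketched as a per-block classification; a toy model of the dispatch logic, not PG-Strom’s actual data structures:

```python
def plan_block_reads(blocks, pg_shared_buffer, os_page_cache, dirty_pages):
    """Classify each block as the slides describe (toy model only):
      1. blocks cached in PostgreSQL's shared buffers are copied from there;
      2. blocks in the OS page cache are copied only if dirty (fast-SSD mode);
      3. everything else goes via SSD-to-GPU P2P DMA."""
    plan = {"from_pg_buffer": [], "from_page_cache": [], "p2p_dma": []}
    for blk in blocks:
        if blk in pg_shared_buffer:
            plan["from_pg_buffer"].append(blk)
        elif blk in os_page_cache and blk in dirty_pages:
            plan["from_page_cache"].append(blk)
        else:
            # the on-disk copy is valid, so read it directly into the GPU
            plan["p2p_dma"].append(blk)
    return plan

# the block states from the diagram: BLK-101/105 cached by PG,
# BLK-104/108 cached by OS (assume only 108 is dirty)
plan = plan_block_reads(range(100, 110), {101, 105}, {104, 108}, {108})
```

Note that a clean OS-cached block still takes the P2P DMA path, because its on-disk copy is identical to the cached one.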
To be discussed (1)
▌In which cases can we read the blocks using SSD-to-GPU Direct SQL?
 The block is ALL-VISIBLE.
 The block is not ALL-VISIBLE, but we can determine MVCC visibility without
the commit log, using t_infomask.
▌zheap will eliminate the necessity of the visibility map
▌Solution: A lightweight hook on SetHintBits()
 An extension can maintain its own structure to manage the state of blocks
asynchronously.
 The hook allows invalidating a block once it becomes unsafe for direct read.
 Other use scenario:
A hybrid data store (row & column) can use the hook to invalidate a
range that has already been transformed to columnar format.
Can the visibility map be a stable infrastructure?
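The proposed lightweight hook on SetHintBits() could be modeled like this; a minimal sketch of the invalidation protocol, with illustrative names only:

```python
class DirectReadMap:
    """Tracks which blocks are currently safe for SSD-to-GPU direct read.
    A sketch of the proposed SetHintBits() hook: when hint bits change on
    a block, the extension invalidates its cached 'safe' state and must
    re-check the block before the next direct read."""

    def __init__(self):
        self.safe_blocks = set()

    def mark_safe(self, blkno):
        # recorded asynchronously, e.g. after an all-visible check
        self.safe_blocks.add(blkno)

    def on_set_hint_bits(self, blkno):
        # hook callback: visibility information changed, so the block
        # is no longer guaranteed safe until re-checked
        self.safe_blocks.discard(blkno)

    def can_direct_read(self, blkno):
        return blkno in self.safe_blocks
```

The same invalidation path could serve the hybrid row/column scenario: transforming a range to columnar would fire the callback for the affected blocks.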
I/O Path optimization on PCIe-bus
and Partition-wise Join
Consideration for the hardware configuration (1/2)
A PCIe switch can take load off the CPU.
CPU CPU
PLX
SSD GPU
PLX
SSD GPU
PLX
SSD GPU
PLX
SSD GPU
SCAN SCAN SCAN SCAN
JOIN JOIN JOIN JOIN
GROUP BY GROUP BY GROUP BY GROUP BY
Very small amount of data
GATHER GATHER
Consideration for the hardware configuration (2/2)
HeteroDB Products Brief (1H-2019 edition)
Supermicro
SYS-4029TRT2
x96 lane
PCIe
switch
x96 lane
PCIe
switch
CPU2 CPU1
QPI
Gen3
x16
Gen3 x16
for each
slot
Gen3 x16
for each
slot
Gen3
x16
▌HPC Server
▌I/O Expansion Box
NEC ExpEther 40G
(4slots)
Network
Switch
4 slots of
PCIe Gen3 x8
PCIe
Switch
40Gb
Ethernet
CPU
NIC
Extra I/O Boxes
Hardware optimized table partition layout
lineorder
lineorder_p0
lineorder_p1
lineorder_p2
remainder=0
remainder=1
remainder=2
customer date
supplier parts
tablespace: nvme0
tablespace: nvme1
tablespace: nvme2
key
INSERT Hashed key
hash = f(key)
hash % 3 = 2
hash % 3 = 0
Raw data
1053GB
Partial data
351GB
Partial data
351GB
Partial data
351GB
A partition leaf for each I/O expansion box, assigned via the tablespaces
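The hash % 3 routing shown above can be sketched as follows; Python’s built-in hash() stands in for PostgreSQL’s partition hash function (an assumption for illustration only):

```python
NUM_PARTITIONS = 3  # one partition leaf (and tablespace) per I/O expansion box

def route_row(key):
    """Pick the partition leaf for an INSERT as in the slide:
    hash the key, then take the remainder modulo the leaf count."""
    return hash(key) % NUM_PARTITIONS

# rows with the same key always land on the same leaf / tablespace,
# so each SSD+GPU pair owns a disjoint slice of lineorder
leaf = route_row(12345)
```

Because the routing is deterministic, each leaf can later be scanned and joined independently on the hardware attached to its own I/O expansion box.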
Multi GPU/SSD and Partition Configuration (1/2)
NEC Express5800/R120g-2m
CPU: Intel Xeon E5-2603 v4 (6C, 1.7GHz)
RAM: 64GB
OS: Red Hat Enterprise Linux 7
(kernel: 3.10.0-862.9.1.el7.x86_64)
CUDA-9.2.148 + driver 396.44
DB: PostgreSQL 11beta3 + PG-Strom v2.1devel
lineorder_a
(351GB)
lineorder_b
(351GB)
lineorder_c
(351GB)
NEC ExpEther (40Gb; 4-slot edition)
I/F: PCIe 3.0 x8 (x16 width) x 4 slots
+ internal PCIe switch
N/W: 40Gb-ethernet
NVIDIA Tesla P40
# of cores: 3840 (1.3GHz)
Device RAM: 24GB (347GB/s, GDDR5)
CC: 6.1 (Pascal, GP104)
I/F: PCIe 3.0 x16
Intel DC P4600 (2.0TB; HHHL)
SeqRead: 3200MB/s
SeqWrite: 1575MB/s
RandRead: 610k IOPS
RandWrite: 196k IOPS
I/F: PCIe 3.0 x4
customer date
supplier parts
Multi GPU/SSD and Partition Configuration (2/2) - Benchmark
 13 SSBM queries against a 1055GB database in total (i.e., 351GB per I/O expansion box)
 Raw I/O data transfer without SQL execution was up to 9GB/s.
In other words, SQL execution was faster than a simple storage read with raw I/O.
[Chart: “Star Schema Benchmark for PgSQL v11beta3 / PG-Strom v2.1devel on NEC ExpEther x3” – query execution throughput [MB/s] for Q1_1 through Q4_3. PostgreSQL v11beta3 runs at roughly 1,790–2,830 MB/s, while PG-Strom v2.1devel runs at roughly 11,500–13,540 MB/s, alongside the raw I/O limitation line.]
max 13.5GB/s for query execution performance with 3x I/O expansion boxes!!
Issue of Partition-wise Join (1/4)
lineorder
lineorder_p0
lineorder_p1
lineorder_p2
remainder=0
remainder=1
remainder=2
customer date
supplier parts
tablespace: nvme0
tablespace: nvme1
tablespace: nvme2
New in PostgreSQL v11: parallel scan of the partition leaves
Scan
Scan
Scan
Gather
Join
Agg
Query
Results
Scan
A massive number of records
makes Gather a bottleneck
Issue of Partition-wise Join (2/4)
lineorder
lineorder_p0
lineorder_p1
lineorder_p2
remainder=0
remainder=1
remainder=2
customer date
supplier parts
tablespace: nvme0
tablespace: nvme1
tablespace: nvme2
Preferable: gathering the partition leaves after JOIN / GROUP BY
Join
Gather
Agg
Query
Results
Scan
Scan
PreAgg
Join
Scan
PreAgg
Join
Scan
PreAgg
Issue of Partition-wise Join (3/4)
▌INNER JOIN / RIGHT OUTER JOIN
 It works if the planner distributes customer to each individual leaf of lineorder.
 No enhancement on the executor side is needed.
 customer × (lineorder_p0 + lineorder_p1 + lineorder_p2)
= (customer × lineorder_p0) + (customer×lineorder_p1) + (customer×lineorder_p2)
▌LEFT OUTER JOIN / FULL OUTER JOIN
 The left side of the join needs to track unmatched tuples in a shared structure.
lineorder_p2
customer
lineorder_p1
lineorder_p0
JOIN
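The algebraic point above can be demonstrated with a toy equi-join: INNER JOIN distributes over the union of partition leaves, while LEFT OUTER JOIN needs shared knowledge of which outer tuples were matched in any leaf:

```python
def inner_join(outer, inner):
    # toy equi-join on the values themselves
    return [(o, i) for o in outer for i in inner if o == i]

customer = [1, 2, 3]
lineorder_p0, lineorder_p1, lineorder_p2 = [1, 1], [2], [4]

# INNER JOIN distributes over the partition union:
whole = inner_join(customer, lineorder_p0 + lineorder_p1 + lineorder_p2)
per_leaf = (inner_join(customer, lineorder_p0)
            + inner_join(customer, lineorder_p1)
            + inner_join(customer, lineorder_p2))
assert sorted(whole) == sorted(per_leaf)

# LEFT OUTER JOIN does not: customer 3 is unmatched only if it is
# unmatched in *every* leaf, so the leaves must share that state.
unmatched = [c for c in customer
             if all(c not in leaf
                    for leaf in (lineorder_p0, lineorder_p1, lineorder_p2))]
```

This is why the per-leaf plan works out of the box for INNER/RIGHT OUTER joins but needs a shared structure for LEFT/FULL OUTER joins.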
Issue of Partition-wise Join (4/4)
ssbm =# EXPLAIN SELECT sum(lo_extendedprice*lo_discount) as revenue
FROM lineorder,date1
WHERE lo_orderdate = d_datekey
AND d_year = 1993
AND lo_discount between 1 and 3
AND lo_quantity < 25;
QUERY PLAN
------------------------------------------------------------------------------
Aggregate
-> Gather
Workers Planned: 9
-> Parallel Append
-> Parallel Custom Scan (GpuPreAgg)
Reduction: NoGroup
Combined GpuJoin: enabled
GPU Preference: GPU2 (Tesla P40)
-> Parallel Custom Scan (GpuJoin) on lineorder_p2
Outer Scan: lineorder_p2
Outer Scan Filter: ((lo_discount >= '1'::numeric) AND (lo_discount <= '3'::numeric)
AND (lo_quantity < '25'::numeric))
Depth 1: GpuHashJoin (nrows 102760469...45490403)
HashKeys: lineorder_p2.lo_orderdate
JoinQuals: (lineorder_p2.lo_orderdate = date1.d_datekey)
KDS-Hash (size: 66.03KB)
GPU Preference: GPU2 (Tesla P40)
NVMe-Strom: enabled
-> Seq Scan on date1
Filter: (d_year = 1993)
-> Parallel Custom Scan (GpuPreAgg)
Reduction: NoGroup
Combined GpuJoin: enabled
GPU Preference: GPU1 (Tesla P40)
:
Portion to be executed
on the 3rd I/O expansion box.
To be discussed (2)
① Built-in planner support to construct partition-wise JOIN
② Shared data structure for the distributed INNER tables
lineorder_p2
customer
lineorder_p1
lineorder_p0
Hash of
customer
Hash of
customer
Hash of
customer
A shared structure that
the other distributed Hash-nodes
can reference
 
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Natan Silnitsky
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odishasmiwainfosol
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceBrainSell Technologies
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Matt Ray
 
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...OnePlan Solutions
 
Cyber security and its impact on E commerce
Cyber security and its impact on E commerceCyber security and its impact on E commerce
Cyber security and its impact on E commercemanigoyal112
 
What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....kzayra69
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作qr0udbr0
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanyChristoph Pohl
 
PREDICTING RIVER WATER QUALITY ppt presentation
PREDICTING  RIVER  WATER QUALITY  ppt presentationPREDICTING  RIVER  WATER QUALITY  ppt presentation
PREDICTING RIVER WATER QUALITY ppt presentationvaddepallysandeep122
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEEVICTOR MAESTRE RAMIREZ
 
Introduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdfIntroduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdfFerryKemperman
 
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...OnePlan Solutions
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Velvetech LLC
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesŁukasz Chruściel
 
Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Mater
 
React Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaReact Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaHanief Utama
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)jennyeacort
 
Odoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 EnterpriseOdoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 Enterprisepreethippts
 

Recently uploaded (20)

MYjobs Presentation Django-based project
MYjobs Presentation Django-based projectMYjobs Presentation Django-based project
MYjobs Presentation Django-based project
 
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
Taming Distributed Systems: Key Insights from Wix's Large-Scale Experience - ...
 
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company OdishaBalasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
Balasore Best It Company|| Top 10 IT Company || Balasore Software company Odisha
 
CRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. SalesforceCRM Contender Series: HubSpot vs. Salesforce
CRM Contender Series: HubSpot vs. Salesforce
 
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
Open Source Summit NA 2024: Open Source Cloud Costs - OpenCost's Impact on En...
 
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
Tech Tuesday - Mastering Time Management Unlock the Power of OnePlan's Timesh...
 
Cyber security and its impact on E commerce
Cyber security and its impact on E commerceCyber security and its impact on E commerce
Cyber security and its impact on E commerce
 
What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
 
PREDICTING RIVER WATER QUALITY ppt presentation
PREDICTING  RIVER  WATER QUALITY  ppt presentationPREDICTING  RIVER  WATER QUALITY  ppt presentation
PREDICTING RIVER WATER QUALITY ppt presentation
 
Cloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEECloud Data Center Network Construction - IEEE
Cloud Data Center Network Construction - IEEE
 
Introduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdfIntroduction Computer Science - Software Design.pdf
Introduction Computer Science - Software Design.pdf
 
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
Maximizing Efficiency and Profitability with OnePlan’s Professional Service A...
 
Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...Software Project Health Check: Best Practices and Techniques for Your Product...
Software Project Health Check: Best Practices and Techniques for Your Product...
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New Features
 
Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)Ahmed Motair CV April 2024 (Senior SW Developer)
Ahmed Motair CV April 2024 (Senior SW Developer)
 
React Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief UtamaReact Server Component in Next.js by Hanief Utama
React Server Component in Next.js by Hanief Utama
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
 
Odoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 EnterpriseOdoo 14 - eLearning Module In Odoo 14 Enterprise
Odoo 14 - eLearning Module In Odoo 14 Enterprise
 

PGconf.ASIA Unconference Session - Partition-wise Join & Visibility-Map

  • 1. PGconf.ASIA Unconference Session ~Partition-wise Join & Visibility-Map~ HeteroDB,Inc Chief Architect & CEO KaiGai Kohei <kaigai@heterodb.com>
  • 2. about HeteroDB Discussion Materials for Foreign Partners Corporate overview  Name HeteroDB,Inc  Established 4th-Jul-2017  Location Shinagawa, Tokyo, Japan  Businesses Sales of accelerated database products; technical consulting in the GPU & DB region With heterogeneous-computing technology in the database area, we provide a useful, fast and cost-effective data analytics platform for all the people who need the power of analytics. CEO Profile  KaiGai Kohei – He has contributed to PostgreSQL and Linux kernel development in the OSS community for more than ten years, especially to the security and database federation features of PostgreSQL.  Award of “Genius Programmer” by the IPA MITOH program (2007)  Top-5 posters finalist at GPU Technology Conference 2017.
  • 3. Features of RDBMS ✓ High-availability / Clustering ✓ DB administration and backup ✓ Transaction control ✓ BI and visualization ➔ We can use the products that support PostgreSQL as-is. Core technology – PG-Strom PG-Strom: An extension module for PostgreSQL that accelerates SQL workloads using the thousands of cores and wide-band memory of GPUs. GPU Big-data Analytics PG-Strom NVME and GPU accelerates PostgreSQL beyond the limitation -PGconf.EU 2018-3 Rapid mass data loading from the storage device Machine-learning & Statistics
  • 4. SSD-to-GPU Direct SQL Execution and Visibility-map
  • 5. A usual composition of x86_64 server GPU SSD CPU RAM HDD N/W
  • 6. Data flow to process a massive amount of data CPU RAM SSD GPU PCIe PostgreSQL Data Blocks Normal Data Flow All the records, including junk, must be loaded onto RAM once, because software cannot check the necessity of the rows prior to data loading. So, the amount of I/O traffic over the PCIe bus tends to be large. Unless records are loaded to CPU/RAM once, over the PCIe bus, software cannot check their necessity, even if they are “junk”.
  • 7. Core Feature: SSD-to-GPU Direct SQL CPU RAM SSD GPU PCIe PostgreSQL Data Blocks NVIDIA GPUDirect RDMA It loads data blocks on NVME-SSD to the GPU using peer-to-peer DMA over the PCIe bus, bypassing CPU/RAM. WHERE-clause JOIN GROUP BY Run SQL by GPU to reduce the data size Data Size: Small v2.0
  • 8. Element technology - NVIDIA GPUDirect RDMA Physical address space PCIe BAR1 Area GPU device memory RAM NVMe-SSD Infiniband HBA PCIe device GPUDirect RDMA It enables mapping of GPU device memory into the physical address space of the host system. Once the “physical address of GPU device memory” appears, we can use it as the source or destination address of DMA with PCIe devices. 0xf0000000 0xe0000000 DMA Request SRC: 1200th sector LEN: 40 sectors DST: 0xe0200000
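The mapping above can be sketched numerically: once GPU device memory is visible at a host physical address (the PCIe BAR1 area), a peer-to-peer DMA request is simply "read N sectors into this physical address". This is an illustrative sketch only; the base address, sector size and request layout are assumptions, not the actual nvme_strom interface.

```python
# Illustrative only: GPU_BAR1_BASE and the request layout are assumptions,
# not the real nvme_strom structures.
SECTOR_SIZE = 512          # assumed NVME logical block size
GPU_BAR1_BASE = 0xe0000000 # example address where GPU memory is mapped

def build_dma_request(first_sector, num_sectors, gpu_offset):
    """Describe a peer-to-peer DMA: SSD sectors -> GPU device memory."""
    return {
        "src_sector": first_sector,
        "len_sectors": num_sectors,
        "dst_phys_addr": GPU_BAR1_BASE + gpu_offset,
        "nbytes": num_sectors * SECTOR_SIZE,
    }

req = build_dma_request(1200, 40, 0x200000)
```

With the example values from the slide, this yields destination address 0xe0200000 for a 40-sector (20KB) transfer.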
  • 9. Benchmark Results – single-node version [Chart: Star Schema Benchmark on NVMe-SSD + md-raid0, query processing throughput in MB/sec for Q1_1–Q4_3; PgSQL9.6(SSDx3) ranges 1900.0–2172.3, PGStrom2.0(SSDx3) ranges 5479.3–6282.5, vs H/W Spec (3xSSD)] SSD-to-GPU Direct SQL delivers performance close to the H/W spec  Measured by the Star Schema Benchmark, a set of typical batch / reporting workloads.  CPU: Intel Xeon E5-2650v4, RAM: 128GB, GPU: NVIDIA Tesla P40, SSD: Intel 750 (400GB; SeqRead 2.2GB/s)x3  Dataset size is 353GB (sf: 401), to ensure an I/O-bound workload
  • 10. SSD-to-GPU Direct SQL - Software Stack Filesystem (ext4, xfs) nvme_strom kernel module NVMe SSD NVIDIA Tesla GPU PostgreSQL pg_strom extension read(2) ioctl(2) Hardware Layer Operating System Software Layer Database Software Layer blk-mq nvme pcie nvme rdma Network HBA File-offset to NVME-SSD sector number translation NVMEoF Target (JBOF) NVMe Request ■ Other software component ■ Our developed software ■ Hardware NVME over Fabric nvme_strom v2.0 supports NVME-over-Fabric (RDMA).
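The "file-offset to NVME-SSD sector number translation" box can be sketched as follows. The real nvme_strom module resolves ext4/xfs extents inside the kernel; here, as a simplifying assumption, the table file is contiguous and starts at a known sector.

```python
BLCKSZ = 8192   # PostgreSQL block size
SECTOR = 512    # assumed NVME logical sector size

def block_to_sectors(block_no, file_start_sector):
    """Translate a PostgreSQL block number into (first sector, sector count),
    assuming a contiguous file; real code walks filesystem extents."""
    byte_offset = block_no * BLCKSZ
    return file_start_sector + byte_offset // SECTOR, BLCKSZ // SECTOR

first, count = block_to_sectors(100, file_start_sector=0)
```

Each 8KB block thus maps to a run of 16 consecutive 512-byte sectors, which becomes the NVMe Request issued toward the SSD.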
  • 11. uninitialized File BLK-100: unknown BLK-101: unknown BLK-102: unknown BLK-103: unknown BLK-104: unknown BLK-105: unknown BLK-106: unknown BLK-107: unknown BLK-108: unknown BLK-109: unknown Consistency with disk buffers of PostgreSQL / Linux kernel BLK-100: uncached BLK-101: cached by PG BLK-102: uncached BLK-103: uncached BLK-104: cached by OS BLK-105: cached by PG BLK-106: uncached BLK-107: uncached BLK-108: cached by OS BLK-109: uncached BLCKSZ (=8KB) Transfer Size Per Request BLCKSZ * NChunks BLK-108: cached by OS BLK-104: cached by OS BLK-105: cached by PG BLK-101: cached by PG BLK-100: uncached BLK-102: uncached BLK-103: uncached BLK-106: uncached BLK-107: uncached BLK-109: uncached unused SSD-to-GPU P2P DMA Userspace DMA Buffer (RAM) Device Memory (GPU) CUDA API (userspace) cuMemcpyHtoDAsync ① PG-Strom checks PG’s shared buffers for the blocks to read ✓ whether the block is already cached in the shared buffers of PostgreSQL ✓ whether the block may not be all-visible, according to the visibility map ② NVME-Strom checks the OS’s page cache for the blocks to read ✓ whether the block is already cached in the page cache of the Linux kernel ✓ but, in fast-SSD mode, the page cache shall not be copied unless it is dirty ③ Then, it kicks SSD-to-GPU P2P DMA for the uncached blocks BLK-108: cached by OS BLK-104: cached by OS BLK-105: cached by PG BLK-101: cached by PG
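The three-step check (①–③) can be sketched as a classification over simplified block states. The state names are assumptions made for illustration, and the visibility-map test is folded into the "cached by PG" path for brevity: a clean OS-cached block need not be copied through RAM, because its on-disk image is up to date and can be read by P2P DMA.

```python
def plan_block_reads(blocks):
    """blocks: block_no -> 'pg' | 'os-dirty' | 'os' | 'uncached' (assumed states)."""
    plan = {"from_pg_buffer": [], "from_page_cache": [], "p2p_dma": []}
    for blk, state in sorted(blocks.items()):
        if state == "pg":               # (1) already in PG shared buffers
            plan["from_pg_buffer"].append(blk)
        elif state == "os-dirty":       # (2) dirty page cache must be copied
            plan["from_page_cache"].append(blk)
        else:                           # (3) clean or uncached: read the block
            plan["p2p_dma"].append(blk) #     directly by SSD-to-GPU P2P DMA
    return plan

plan = plan_block_reads({100: "uncached", 101: "pg", 104: "os",
                         105: "pg", 108: "os-dirty"})
```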
  • 12. To be discussed (1) ▌In which cases we can read blocks using SSD-to-GPU Direct SQL  The block is ALL-VISIBLE  The block is not ALL-VISIBLE, but we can determine the MVCC visibility without the commit log, using t_infomask. ▌zheap will eliminate the necessity of the visibility map ▌Solution: A lightweight hook on SetHintBits()  An extension can maintain its own structure to manage the state of blocks asynchronously.  The hook allows invalidation once a block becomes unsafe for direct read.  Other use scenario: A hybrid data store (row & column) can use the hook to invalidate a range that has already been transformed to columnar. Can the visibility map be a stable infrastructure?
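A minimal sketch of the proposed hook, with hypothetical names: the extension tracks which blocks are currently safe for direct read, and the SetHintBits() hook drops a block from that set as soon as its hint bits are rewritten.

```python
class DirectReadState:
    """Hypothetical per-relation state an extension might keep."""
    def __init__(self):
        self.safe_blocks = set()

    def mark_safe(self, block_no):
        self.safe_blocks.add(block_no)

    def on_set_hint_bits(self, block_no):
        # invoked from the SetHintBits() hook: the on-disk image of this
        # block is about to change, so it is no longer safe for direct read
        self.safe_blocks.discard(block_no)

    def is_safe(self, block_no):
        return block_no in self.safe_blocks

st = DirectReadState()
st.mark_safe(101)
st.on_set_hint_bits(101)   # hint bit rewritten -> block invalidated
```

The same invalidation path would serve the hybrid row/column store scenario: the columnar side drops a converted range whenever a hint bit changes underneath it.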
  • 13. I/O Path optimization on PCIe-bus and Partition-wise Join
  • 14. Consideration for the hardware configuration (1/2) A PCIe switch can take load off the CPU. CPU CPU PLX SSD GPU PLX SSD GPU PLX SSD GPU PLX SSD GPU SCAN SCAN SCAN SCAN JOIN JOIN JOIN JOIN GROUP BY GROUP BY GROUP BY GROUP BY Very small amount of data GATHER GATHER
  • 15. Consideration for the hardware configuration (2/2) HeteroDB Products Brief (1H-2019 edition) Supermicro SYS-4029TRT2 x96 lane PCIe switch x96 lane PCIe switch CPU2 CPU1 QPI Gen3 x16 Gen3 x16 for each slot Gen3 x16 for each slot Gen3 x16 ▌HPC Server ▌I/O Expansion Box NEC ExpEther 40G (4 slots) Network Switch 4 slots of PCIe Gen3 x8 PCIe Switch 40Gb Ethernet CPU NIC Extra I/O Boxes
  • 16. Hardware optimized table partition layout lineorder lineorder_p0 lineorder_p1 lineorder_p2 remainder=0 remainder=1 remainder=2 customer date supplier parts tablespace: nvme0 tablespace: nvme1 tablespace: nvme2 key INSERT Hashed key hash = f(key) hash % 3 = 2 hash % 3 = 0 Raw data 1053GB Partial data 351GB Partial data 351GB Partial data 351GB Partition leaf for each I/O expansion box on behalf of the tablespaces
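The INSERT routing in the layout above can be sketched like this; zlib.crc32 stands in for the real partition hash function, and the tablespace names follow the slide.

```python
import zlib

TABLESPACES = ["nvme0", "nvme1", "nvme2"]   # one per I/O expansion box

def route(key):
    """Pick the partition leaf / tablespace by hash(key) % 3."""
    h = zlib.crc32(str(key).encode())       # stand-in for the real hash
    return TABLESPACES[h % 3]
```

Each of the three leaves then holds roughly a third of the 1053GB raw data (~351GB each), so every I/O expansion box only scans its local slice.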
  • 17. Multi GPU/SSD and Partition Configuration (1/2) HeteroDB Products Brief (1H-2019 edition) NEC Express5800/R120g-2m CPU: Intel Xeon E5-2603 v4 (6C, 1.7GHz) RAM: 64GB OS: Red Hat Enterprise Linux 7 (kernel: 3.10.0-862.9.1.el7.x86_64) CUDA-9.2.148 + driver 396.44 DB: PostgreSQL 11beta3 + PG-Strom v2.1devel lineorder_a (351GB) lineorder_b (351GB) lineorder_c (351GB) NEC ExpEther (40Gb; 4-slot version) I/F: PCIe 3.0 x8 (x16 width) x 4 slots + internal PCIe switch N/W: 40Gb-ethernet NVIDIA Tesla P40 # of cores: 3840 (1.3GHz) Device RAM: 24GB (347GB/s, GDDR5) CC: 6.1 (Pascal, GP104) I/F: PCIe 3.0 x16 Intel DC P4600 (2.0TB; HHHL) SeqRead: 3200MB/s SeqWrite: 1575MB/s RandRead: 610k IOPS RandWrite: 196k IOPS I/F: PCIe 3.0 x4 customer date supplier parts
  • 18. Multi GPU/SSD and Partition Configuration (2/2) - Benchmark  13 SSBM queries against a 1055GB database in total (i.e. 351GB per I/O expansion box)  Raw I/O data transfer without SQL execution was up to 9GB/s. In other words, SQL execution was faster than a simple storage read with raw I/O. [Chart: Star Schema Benchmark for PgSQL v11beta3 / PG-Strom v2.1devel on NEC ExpEther x3, query execution throughput in MB/s for Q1_1–Q4_3; PostgreSQL v11beta3 ranges 1,789–2,831, PG-Strom v2.1devel ranges 11,498–13,536, against the Raw I/O Limitation] max 13.5GB/s query execution performance with 3x I/O expansion boxes!!
  • 19. Issue of Partition-wise Join (1/4) lineorder lineorder_p0 lineorder_p1 lineorder_p2 remainder=0 remainder=1 remainder=2 customer date supplier parts tablespace: nvme0 tablespace: nvme1 tablespace: nvme2 New in PostgreSQL v11: Parallel scan of the partition leaves Scan Scan Scan Gather Join Agg Query Results Scan A massive number of records makes gathering hard
  • 20. Issue of Partition-wise Join (2/4) lineorder lineorder_p0 lineorder_p1 lineorder_p2 remainder=0 remainder=1 remainder=2 customer date supplier parts tablespace: nvme0 tablespace: nvme1 tablespace: nvme2 Preferable: Gathering the partition leaves after JOIN / GROUP BY Join Gather Agg Query Results Scan Scan PreAgg Join Scan PreAgg Join Scan PreAgg
  • 21. Issue of Partition-wise Join (3/4) ▌INNER JOIN / RIGHT OUTER JOIN  It works if the planner distributes customer to each individual leaf of lineorder.  No enhancement on the executor side is needed.  customer × (lineorder_p0 + lineorder_p1 + lineorder_p2) = (customer × lineorder_p0) + (customer × lineorder_p1) + (customer × lineorder_p2) ▌LEFT OUTER JOIN / FULL OUTER JOIN  The left side of the join needs to track unreferenced tuples in a shared structure. lineorder_p2 customer lineorder_p1 lineorder_p0 JOIN
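The INNER JOIN identity above can be checked with a toy hash join on Python tuples (the sample rows are hypothetical):

```python
def inner_join(inner, outer, key=lambda r: r[0]):
    """Toy hash join: build a hash on `inner`, probe with `outer`."""
    tbl = {}
    for row in inner:
        tbl.setdefault(key(row), []).append(row)
    return [(i, o) for o in outer for i in tbl.get(key(o), [])]

customer = [(1, "c1"), (2, "c2"), (3, "c3")]
p0, p1, p2 = [(1, "o-a")], [(2, "o-b")], [(3, "o-c"), (9, "o-d")]

# joining the union of the leaves equals the union of the per-leaf joins
whole = inner_join(customer, p0 + p1 + p2)
parts = (inner_join(customer, p0) + inner_join(customer, p1)
         + inner_join(customer, p2))
```

For LEFT/FULL OUTER JOIN the identity fails without extra bookkeeping: whether a customer row matched nothing can only be decided across all leaves, which is exactly why the slide calls for a shared structure.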
  • 22. Issue of Partition-wise Join (4/4) ssbm =# EXPLAIN SELECT sum(lo_extendedprice*lo_discount) as revenue FROM lineorder,date1 WHERE lo_orderdate = d_datekey AND d_year = 1993 AND lo_discount between 1 and 3 AND lo_quantity < 25; QUERY PLAN ------------------------------------------------------------------------------ Aggregate -> Gather Workers Planned: 9 -> Parallel Append -> Parallel Custom Scan (GpuPreAgg) Reduction: NoGroup Combined GpuJoin: enabled GPU Preference: GPU2 (Tesla P40) -> Parallel Custom Scan (GpuJoin) on lineorder_p2 Outer Scan: lineorder_p2 Outer Scan Filter: ((lo_discount >= '1'::numeric) AND (lo_discount <= '3'::numeric) AND (lo_quantity < '25'::numeric)) Depth 1: GpuHashJoin (nrows 102760469...45490403) HashKeys: lineorder_p2.lo_orderdate JoinQuals: (lineorder_p2.lo_orderdate = date1.d_datekey) KDS-Hash (size: 66.03KB) GPU Preference: GPU2 (Tesla P40) NVMe-Strom: enabled -> Seq Scan on date1 Filter: (d_year = 1993) -> Parallel Custom Scan (GpuPreAgg) Reduction: NoGroup Combined GpuJoin: enabled GPU Preference: GPU1 (Tesla P40) : Portion to be executed on the 3rd I/O expansion box.
  • 23. To be discussed (2) ① Built-in planner support to construct partition-wise JOIN ② A shared data structure for the distributed INNER tables lineorder_p2 customer lineorder_p1 lineorder_p0 Hash of customer Hash of customer Hash of customer A shared structure that the other distributed Hash nodes can reference
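Item ② can be sketched as follows: the hash table of customer is built once, and the Hash-join node of every partition leaf probes the same shared instance instead of rebuilding it per leaf (the toy rows are hypothetical):

```python
def build_hash(rows, key=lambda r: r[0]):
    """Build the inner-side hash table once."""
    tbl = {}
    for r in rows:
        tbl.setdefault(key(r), []).append(r)
    return tbl

# built once, shared by the Hash nodes of all partition leaves
customer_hash = build_hash([(1, "c1"), (2, "c2"), (3, "c3")])

def probe(shared_hash, outer_rows, key=lambda r: r[0]):
    """Each leaf's join probes the shared hash with its own outer rows."""
    return [(i, o) for o in outer_rows for i in shared_hash.get(key(o), [])]

r0 = probe(customer_hash, [(1, "o-a")])   # e.g. leaf lineorder_p0
r1 = probe(customer_hash, [(2, "o-b")])   # e.g. leaf lineorder_p1
```

The design question on the slide is where such a structure lives (shared memory reachable from all workers) so that the distributed Hash nodes can reference one copy.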