1
Robust HA Solutions - Native Support
for PXC and InnoDB Cluster in
ProxySQL
Marco Tusa
Percona
2
2
• Open source enthusiast
• Principal architect
• Working in the DB world for over 25 years
• Open source developer and community contributor
About Me
3
What this talk is about
High level description of High Availability
HA solutions
What current vanilla solutions miss
Where ProxySQL fits
It is NOT about Performance
It is NOT an in-depth description of HA or ProxySQL features
It is NOT an implementation guide
4
• The need for, and the scale of, HA or DR depends on the real
needs of your business.
• We are pathologically online/connected, and we often
expect over-dimensioned HA or DR.
• Business needs often do not require all the components to
be ALWAYS available.
How We Need to Think about HA and DR
5
Why We Need HA and DR
Do:
• Business needs
• Technical challenges
• Supportable solutions
• Knowhow
Don’t:
• Choose based on the “shiny object”
• Pick something you know nothing about
• Choose by yourself and push it up or down
• Use shortcuts to accelerate deployment
The first step toward a robust solution is designing the right solution for
your business.
6
• High Availability covers service availability within a single location
• 10 Gb Ethernet best-case scenario
• 400 m max distance
HA vs DR
7
• Disaster Recovery assures we can restore service in a geographically different location
• Real speed may vary
• Linear distance ~1,000 km
HA vs DR
8
HA vs DR
DO NOT MIX
the solutions in your architecture
http://www.tusacentral.com/joomla/index.php/mysql-blogs/204-how-not-to-do-mysql-high-availability-geographic-node-distribution-with-galera-based-replication-misuse
http://www.tusacentral.com/joomla/index.php/mysql-blogs/205-mysql-high-availability-on-premises-a-geographically-distributed-scenario
9
Replicating data is the key - Sync vs Async
[Diagram: with synchronous replication all nodes share a single data state; with asynchronous replication each node can be in a different data state]
10
Data Replication is the Base
Tightly coupled database clusters
• Data-centric approach (single state
of the data, distributed commit)
• Data is consistent in time across
nodes
• Replication requires a high-performance
link
• Geographic distribution is
forbidden
• DR is not supported
Loosely coupled database clusters
• Single-node approach (local commit)
• Data state differs by node
• A single node's state does not affect the
cluster
• Replication link doesn't need to be
high performance
• Geographic distribution is allowed
• DR is supported
11
In the MySQL ecosystem we have different solutions:
• NDB
• Galera replication
• InnoDB group replication
• Basic Replication
Different solutions
12
Different solutions
13
Good things:
• Very high availability (five 9s)
• Write scaling (real, not fake)
• Read scaling
• Data is in sync all the time
Bad things:
• To be really efficient, data should stay in memory as much as possible
• Complex to manage
• Requires deep understanding, also from the application point of view
Understand solutions: NDB
14
Replication by internal design
Good things:
• Highly available
• Read scaling
• Data is almost in sync all the time
Bad things:
• Doesn't scale writes
• More nodes means more internal overhead
• Network is a very impactful factor
• 1 master only (PLEASE!!!!!)
Understand solutions: Galera
15
Understand solutions: Group replication
Replication by standard MySQL
Good things:
• Highly available
• Read scaling
• Data is almost in sync all the time
• More network tolerant
Bad things:
• Doesn't scale writes
• 1 master only (PLEASE!!!!!)
• When does something stop being a new thing?
16
Understand solutions: Replication
Standard MySQL replication
Good things:
• Each node is independent
• Read scaling
• Low network impact
Bad things:
• Stale reads
• Low HA
• Consistent only on the master
• Each node has its own data state
17
1. No embedded R/W split
• Writes and reads are bound to one node
• A failing read channel will cause total failure
2. No recognition of possible stale reads
• In case of replication delay, the application can read stale data
3. No node-failure identification and no single entry point
• The architecture must implement a double mechanism: something to check node
status (a topology manager) and an entry point such as a VIP
• The application must use a connector that allows multiple entry points in case
of issues (like the Java connector)
What do they have in common?
18
1. Embedded R/W split by group of servers (hostgroups)
• Writes and reads are automatically redirected to working node(s)
2. Identifies stale nodes with the binlog reader
• In case of replication delay, the delayed node will not receive read
requests
3. Embedded node-failure detection, automatic redirection
• ProxySQL provides native status recognition for Galera and Group
Replication
• Only when using basic replication do you need a topology manager
• Applications connect to ProxySQL; no need to add a VIP
What ProxySQL can do better
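For point 1, the R/W split itself is driven by query rules. A minimal sketch (the rule IDs and the writer/reader hostgroup IDs 100/101 are assumptions, matching the Galera example later in this deck):

```sql
-- Send SELECT ... FOR UPDATE to the writer hostgroup (100)
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply)
VALUES (200,1,'^SELECT.*FOR UPDATE',100,1);
-- Send all other SELECTs to the reader hostgroup (101)
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply)
VALUES (201,1,'^SELECT',101,1);
-- Everything not matched falls through to the default hostgroup of the user
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;
```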
19
Native health checks
read_only
wsrep_local_recv_queue
wsrep_desync
wsrep_reject_queries
wsrep_sst_donor_rejects_queries
primary_partition
ProxySQL for Galera
• writer_hostgroup: the hostgroup ID that refers to the WRITER
• backup_writer_hostgroup: the hostgroup ID of the hostgroup that will
contain the candidate writers
• reader_hostgroup: the reader hostgroup ID, containing the list of
servers to be taken into consideration
• offline_hostgroup: the hostgroup ID that will eventually contain a
writer that is put OFFLINE
• active: True[1]/False[0], whether this configuration should be used or not
• max_writers: the MAX number of writers you want to have at the
same time. In a sane setup this should always be 1, but if you want
to have multiple writers, you can define it up to the number of nodes.
• writer_is_also_reader: if true [1], the writer will NOT be removed
from the reader HG
• max_transactions_behind: the wsrep_local_recv_queue length after
which the node will be set OFFLINE. This must be set carefully,
observing the node's behaviour.
• comment: I suggest putting meaningful notes here to identify what is
what.
20
I have these nodes
192.168.1.205 (Node1)
192.168.1.21 (Node2)
192.168.1.231 (Node3)
ProxySQL for Galera
Will use these groups:
Writer HG-> 100
Reader HG-> 101
BackupW HG-> 102
offHG HG-> 9101
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.205',100,3306,1000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.205',101,3306,1000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.21',101,3306,1000);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.231',101,3306,1000);
insert into mysql_galera_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,
offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) values
(100,102,101,9101,1,1,1,16);
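The inserts above only touch ProxySQL's in-memory configuration; as in the later examples, they must be loaded to runtime and saved. A quick sanity check could then look like this (a sketch, assuming ProxySQL 2.x with the Galera support configured above):

```sql
-- Activate and persist the configuration defined above
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;

-- See which hostgroup each node actually landed in
SELECT hostgroup_id, hostname, port, status FROM runtime_mysql_servers ORDER BY hostgroup_id;

-- Inspect the Galera health checks collected by the Monitor module
SELECT * FROM mysql_server_galera_log ORDER BY time_start_us DESC LIMIT 3;
```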
21
ProxySQL for Galera
22
ProxySQL for Group Replication
Native health checks
read_only
viable_candidate
transactions_behind
• writer_hostgroup: the hostgroup ID that refers to the WRITER
• backup_writer_hostgroup: the hostgroup ID of the hostgroup that will
contain the candidate writers
• reader_hostgroup: the reader hostgroup ID, containing the list of
servers to be taken into consideration
• offline_hostgroup: the hostgroup ID that will eventually contain a
writer that is put OFFLINE
• active: True[1]/False[0], whether this configuration should be used or not
• max_writers: the MAX number of writers you want to have at the
same time. In a sane setup this should always be 1, but if you want
to have multiple writers, you can define it up to the number of nodes.
• writer_is_also_reader: if true [1], the writer will NOT be removed
from the reader HG
• max_transactions_behind: determines the maximum number of
transactions behind the writer that ProxySQL should allow before
shunning the node to prevent stale reads (determined by
querying the transactions_behind field of
the sys.gr_member_routing_candidate_status table in MySQL).
• comment: I suggest putting meaningful notes here to identify what is
what.
23
I have these nodes
192.168.4.55 (Node1)
192.168.4.56 (Node2)
192.168.4.57 (Node3)
ProxySQL for Group Replication
Will use these groups:
Writer HG-> 400
Reader HG-> 401
BackupW HG-> 402
offHG HG-> 9401
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.55',400,3306,10000,2000,'GR1');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.55',401,3306,100,2000,'GR1');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.56',401,3306,10000,2000,'GR2');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.57',401,3306,10000,2000,'GR3');
insert into mysql_group_replication_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup,
offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind)
values (400,402,401,9401,1,1,1,100);
select * from mysql_server_group_replication_log order by 3 desc,1 limit 3 ;
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
| hostname | port | time_start_us | success_time_us | viable_candidate | read_only | transactions_behind | error |
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
| 192.168.4.57 | 3306 | 1569593421324355 | 3085 | YES | YES | 0 | NULL |
| 192.168.4.56 | 3306 | 1569593421321825 | 2889 | YES | YES | 0 | NULL |
| 192.168.4.55 | 3306 | 1569593421321435 | 2764 | YES | NO | 0 | NULL |
+--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
24
ProxySQL for Group Replication
25
Native health checks
MySQL ping
read_only
Latency
ProxySQL for NDB
Simpler to configure: there is no replication lag, a node is simply either ON or OFF
delete from mysql_servers where hostgroup_id in (300,301);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.107',300,3306,10000,2000,'DC1 writer');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.107',301,3306,10000,2000,'DC1 Reader');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.108',300,3306,10000,2000,'DC1 writer');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.108',301,3306,10000,2000,'DC1 Reader');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.109',300,3306,10000,2000,'DC1 writer');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.109',301,3306,10000,2000,'DC1 Reader');
INSERT INTO mysql_replication_hostgroups VALUES (300,301,'read_only','NDB_cluster');
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
Easy to monitor
select b.weight, c.* from stats_mysql_connection_pool c left JOIN runtime_mysql_servers b ON c.hostgroup=b.hostgroup_id
and c.srv_host=b.hostname and c.srv_port = b.port where hostgroup in (300,301) order by hostgroup,srv_host desc;
+--------+-----------+------------+----------+--------+...+------------+
| weight | hostgroup | srv_host | srv_port | status |...| Latency_us |
+--------+-----------+------------+----------+--------+...+------------+
| 10000 | 300 | 10.0.0.109 | 3306 | ONLINE |...| 561 |
| 10000 | 300 | 10.0.0.108 | 3306 | ONLINE |...| 494 |
| 10000 | 300 | 10.0.0.107 | 3306 | ONLINE |...| 457 |
| 10000 | 301 | 10.0.0.109 | 3306 | ONLINE |...| 561 |
| 10000 | 301 | 10.0.0.108 | 3306 | ONLINE |...| 494 |
| 10000 | 301 | 10.0.0.107 | 3306 | ONLINE |...| 457 |
+--------+-----------+------------+----------+--------+...+------------+
26
ProxySQL for NDB
27
ProxySQL for MySQL Replication
Native health checks
MySQL ping
read_only
Latency
Stale READS with GTID
delete from mysql_servers where hostgroup_id in (500,501);
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.50',500,3306,10000,2000,'DC1 writer');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.50',501,3306,10000,2000,'DC1 Reader');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.51',501,3306,10000,2000,'DC1 Reader');
INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment)
VALUES ('10.0.0.52',501,3306,10000,2000,'DC1 Reader');
INSERT INTO mysql_replication_hostgroups VALUES (500,501,'read_only', 'Simple replica');
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
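To get the "stale reads with GTID" behaviour, ProxySQL tracks the GTIDs executed on each replica through the proxysql_binlog_reader daemon. A sketch, assuming the daemon runs on every MySQL node and listens on port 3307 (both are assumptions of this example):

```sql
-- Tell ProxySQL where each node's proxysql_binlog_reader listens.
-- With gtid_port set, reads on a connection are routed only to replicas
-- that have already applied the GTID of that connection's last write.
UPDATE mysql_servers SET gtid_port=3307 WHERE hostgroup_id IN (500,501);
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
```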
Easy to monitor
select b.weight, c.* from stats_mysql_connection_pool c left JOIN runtime_mysql_servers b ON
c.hostgroup=b.hostgroup_id and c.srv_host=b.hostname and c.srv_port = b.port where hostgroup in (500,501)
order by hostgroup,srv_host desc;
select * from mysql_server_read_only_log order by time_start_us desc limit 3;
+------------+------+------------------+-----------------+-----------+-------+
| hostname | port | time_start_us | success_time_us | read_only | error |
+------------+------+------------------+-----------------+-----------+-------+
| 10.0.0.52 | 3306 | 1569604127710729 | 895 | 1 | NULL |
| 10.0.0.51 | 3306 | 1569604127697866 | 542 | 1 | NULL |
| 10.0.0.50 | 3306 | 1569604127685005 | 795 | 0 | NULL |
+------------+------+------------------+-----------------+-----------+-------+
28
ProxySQL for MySQL Replication
29
ProxySQL is cool because it:
• Allows you to better distribute the load
• Reduces service downtime by shifting traffic to available node(s)
• Keeps serving reads when the writer is down
• With the binlog reader, lets you identify nodes with stale data even in basic
replication
• Offers more that we are not covering here…
• Multiplexing, sharding, masking, firewalling, etc.
In short, your HA solution will be more resilient and flexible, which
means that your business will be safer.
Conclusions
30
31

Geographically dispersed perconaxtra db cluster deploymentMarco Tusa
 
Sync rep aurora_2016
Sync rep aurora_2016Sync rep aurora_2016
Sync rep aurora_2016Marco Tusa
 
Proxysql ha plam_2016_2_keynote
Proxysql ha plam_2016_2_keynoteProxysql ha plam_2016_2_keynote
Proxysql ha plam_2016_2_keynoteMarco Tusa
 
Empower my sql server administration with 5.7 instruments
Empower my sql server administration with 5.7 instrumentsEmpower my sql server administration with 5.7 instruments
Empower my sql server administration with 5.7 instrumentsMarco Tusa
 
Galera explained 3
Galera explained 3Galera explained 3
Galera explained 3Marco Tusa
 
Plmce 14 be a_hero_16x9_final
Plmce 14 be a_hero_16x9_finalPlmce 14 be a_hero_16x9_final
Plmce 14 be a_hero_16x9_finalMarco Tusa
 
Scaling with sync_replication using Galera and EC2
Scaling with sync_replication using Galera and EC2Scaling with sync_replication using Galera and EC2
Scaling with sync_replication using Galera and EC2Marco Tusa
 
Discard inport exchange table & tablespace
Discard inport exchange table & tablespaceDiscard inport exchange table & tablespace
Discard inport exchange table & tablespaceMarco Tusa
 
MySQL cluster 72 in the Cloud
MySQL cluster 72 in the CloudMySQL cluster 72 in the Cloud
MySQL cluster 72 in the CloudMarco Tusa
 

Mais de Marco Tusa (20)

Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
Percona xtra db cluster(pxc) non blocking operations, what you need to know t...
 
My sql on kubernetes demystified
My sql on kubernetes demystifiedMy sql on kubernetes demystified
My sql on kubernetes demystified
 
Accessing data through hibernate: what DBAs should tell to developers and vic...
Accessing data through hibernate: what DBAs should tell to developers and vic...Accessing data through hibernate: what DBAs should tell to developers and vic...
Accessing data through hibernate: what DBAs should tell to developers and vic...
 
MySQL innoDB split and merge pages
MySQL innoDB split and merge pagesMySQL innoDB split and merge pages
MySQL innoDB split and merge pages
 
Fortify aws aurora_proxy_2019_pleu
Fortify aws aurora_proxy_2019_pleuFortify aws aurora_proxy_2019_pleu
Fortify aws aurora_proxy_2019_pleu
 
Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...
Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...
Accessing Data Through Hibernate; What DBAs Should Tell Developers and Vice V...
 
Are we there Yet?? (The long journey of Migrating from close source to opens...
Are we there Yet?? (The long journey of Migrating from close source to opens...Are we there Yet?? (The long journey of Migrating from close source to opens...
Are we there Yet?? (The long journey of Migrating from close source to opens...
 
Improve aws withproxysql
Improve aws withproxysqlImprove aws withproxysql
Improve aws withproxysql
 
Fortify aws aurora_proxy
Fortify aws aurora_proxyFortify aws aurora_proxy
Fortify aws aurora_proxy
 
Mysql8 advance tuning with resource group
Mysql8 advance tuning with resource groupMysql8 advance tuning with resource group
Mysql8 advance tuning with resource group
 
Proxysql sharding
Proxysql shardingProxysql sharding
Proxysql sharding
 
Geographically dispersed perconaxtra db cluster deployment
Geographically dispersed perconaxtra db cluster deploymentGeographically dispersed perconaxtra db cluster deployment
Geographically dispersed perconaxtra db cluster deployment
 
Sync rep aurora_2016
Sync rep aurora_2016Sync rep aurora_2016
Sync rep aurora_2016
 
Proxysql ha plam_2016_2_keynote
Proxysql ha plam_2016_2_keynoteProxysql ha plam_2016_2_keynote
Proxysql ha plam_2016_2_keynote
 
Empower my sql server administration with 5.7 instruments
Empower my sql server administration with 5.7 instrumentsEmpower my sql server administration with 5.7 instruments
Empower my sql server administration with 5.7 instruments
 
Galera explained 3
Galera explained 3Galera explained 3
Galera explained 3
 
Plmce 14 be a_hero_16x9_final
Plmce 14 be a_hero_16x9_finalPlmce 14 be a_hero_16x9_final
Plmce 14 be a_hero_16x9_final
 
Scaling with sync_replication using Galera and EC2
Scaling with sync_replication using Galera and EC2Scaling with sync_replication using Galera and EC2
Scaling with sync_replication using Galera and EC2
 
Discard inport exchange table & tablespace
Discard inport exchange table & tablespaceDiscard inport exchange table & tablespace
Discard inport exchange table & tablespace
 
MySQL cluster 72 in the Cloud
MySQL cluster 72 in the CloudMySQL cluster 72 in the Cloud
MySQL cluster 72 in the Cloud
 

Último

Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfBoston Institute of Analytics
 
Call Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceCall Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceSapana Sha
 
Heart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectHeart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectBoston Institute of Analytics
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhijennyeacort
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdfHuman37
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFAAndrei Kaleshka
 
Advanced Machine Learning for Business Professionals
Advanced Machine Learning for Business ProfessionalsAdvanced Machine Learning for Business Professionals
Advanced Machine Learning for Business ProfessionalsVICTOR MAESTRE RAMIREZ
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfSocial Samosa
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPramod Kumar Srivastava
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一fhwihughh
 
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样vhwb25kk
 
GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]📊 Markus Baersch
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.natarajan8993
 
MK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxMK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxUnduhUnggah1
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...soniya singh
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsappssapnasaifi408
 
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default  Presentation : Data Analysis Project PPTPredictive Analysis for Loan Default  Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPTBoston Institute of Analytics
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptSonatrach
 

Último (20)

Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdfPredicting Salary Using Data Science: A Comprehensive Analysis.pdf
Predicting Salary Using Data Science: A Comprehensive Analysis.pdf
 
Call Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts ServiceCall Girls In Dwarka 9654467111 Escorts Service
Call Girls In Dwarka 9654467111 Escorts Service
 
Heart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectHeart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis Project
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFA
 
Advanced Machine Learning for Business Professionals
Advanced Machine Learning for Business ProfessionalsAdvanced Machine Learning for Business Professionals
Advanced Machine Learning for Business Professionals
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
 
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
 
E-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptxE-Commerce Order PredictionShraddha Kamble.pptx
E-Commerce Order PredictionShraddha Kamble.pptx
 
GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.
 
MK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docxMK KOMUNIKASI DATA (TI)komdat komdat.docx
MK KOMUNIKASI DATA (TI)komdat komdat.docx
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
 
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default  Presentation : Data Analysis Project PPTPredictive Analysis for Loan Default  Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
 
Call Girls in Saket 99530🔝 56974 Escort Service
Call Girls in Saket 99530🔝 56974 Escort ServiceCall Girls in Saket 99530🔝 56974 Escort Service
Call Girls in Saket 99530🔝 56974 Escort Service
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
 

Robust HA Solutions - Native Support for PXC and InnoDB Cluster in ProxySQL

  • 1. 1 Robust HA Solutions - Native Support for PXC and InnoDB Cluster in ProxySQL Marco Tusa Percona
  • 2. 2 2 • Open source enthusiast • Principal architect • Working in DB world over 25 years • Open source developer and community contributor About Me
  • 3. 3 What this talk is about High-level description of High Availability HA solutions Misses from current vanilla solutions Where ProxySQL fits It is NOT about performance It is NOT an in-depth description of HA or ProxySQL features It is NOT an implementation guide
  • 4. 4 • The need for and dimension of HA or DR is related to the real need of your business. • We are pathologically online/connected, and often we expect to have over-dimensioned HA or DR. • Business needs often do not require all the components to be ALWAYS available. How We Need to Think about HA and DR
  • 5. 5 Why We Need HA and DR Do: • Business needs • Technical challenges • Supportable solutions • Know-how Don’t: • Choose based on the “shiny object” • Pick something you know nothing about • Choose by yourself and push it up or down • Use shortcuts to accelerate deployment time. The first step to having a robust solution is to design the right solution for your business.
  • 6. 6 • High Availability is to cover service availability in a single location • 10 Gb Ethernet, best-case scenario: ~400 m distance max HA vs DR
  • 7. 7 • Disaster Recovery is to ensure we can restore service in a geographically different location • Real speed may vary • Linear distance ~1,000 km HA vs DR
  • 8. 8 HA vs DR DO NOT MIX the solutions in your architecture http://www.tusacentral.com/joomla/index.php/mysql-blogs/204-how-not-to-do-mysql-high-availability-geographic-node-distribution-with-galera-based-replication-misuse http://www.tusacentral.com/joomla/index.php/mysql-blogs/205-mysql-high-availability-on-premises-a-geographically-distributed-scenario
  • 9. 9 Replicating data is the key - Sync vs Async (sync: one data state across nodes; async: different data states per node)
  • 10. 10 Data Replication is the Base Tightly coupled database clusters • Data-centric approach (single state of the data, distributed commit) • Data is consistent in time across nodes • Replication requires a high-performance link • Geographic distribution is forbidden • DR is not supported Loosely coupled database clusters • Single-node approach (local commit) • Data state differs by node • A single node’s state does not affect the cluster • Replication link doesn’t need to be high performance • Geographic distribution is allowed • DR is supported
  • 11. 11 In the MySQL ecosystem we have different solutions • NDB • Galera replication • InnoDB Group Replication • Basic replication Different solutions
  • 13. 13 Good things: • Very high availability (five 9s) • Write scale (for real, not fake) • Read scale • Data is in sync all the time Bad things: • To be very efficient it should stay in memory as much as possible • Complex to manage • Requires deep understanding, also from the application point of view Understand solutions: NDB
  • 14. 14 Replication by internal design Good things: • Highly available • Read scale • Data is almost in sync all the time Bad things: • Doesn’t scale writes • More nodes, more internal overhead • Network is a very impacting factor • 1 Master only (PLEASE!!!!!) Understand solutions: Galera
  • 15. 15 Understand solutions: Group replication Replication by standard MySQL Good things: • Highly available • Read scale • Data is almost in sync all the time • More network tolerant Bad things: • Doesn’t scale writes • 1 Master only (PLEASE!!!!!) When does something stop being a new thing?
  • 16. 16 Understand solutions: Replication Replication by internal design Good things: • Each node is independent • Read scale • Low network impact Bad things: • Stale reads • Low HA • Consistent only on the master • Each node has its own data state
  • 17. 17 1. No embedded R/W split • Writes and reads are bound to 1 node • A failing read channel will cause total failure 2. No recognition of possible stale reads • In case of replication delay, the application can get stale data 3. No node-failure identification and no single entry point • The architecture must implement a double mechanism: a check on node status (topology manager) and an entry point like a VIP • The application must use a connector that allows multiple entry points in case of issues (like the Java connector). What do they have in common?
  • 18. 18 1. Embedded R/W split by group of servers (Host Group) • Writes and reads are automatically redirected to working node(s) 2. Identify stale nodes with the binlog reader • In case of replication delay, delayed nodes will not receive the read requests 3. Embedded node-failure detection, automatic redirection • The proxy provides native status recognition for Galera and Group Replication • Only when using basic replication do you need a topology manager • Applications connect to ProxySQL; no need to add a VIP What ProxySQL can do to be better
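The R/W split itself is driven by ProxySQL query rules. A minimal sketch, assuming hypothetical hostgroups 100 (writer) and 101 (reader):

```sql
-- Send SELECT ... FOR UPDATE to the writer, every other SELECT to the readers
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (200, 1, '^SELECT.*FOR UPDATE', 100, 1);
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (201, 1, '^SELECT', 101, 1);
-- Anything not matched falls through to the user's default hostgroup (the writer)
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

The same pattern applies to all the topologies below; only the hostgroup IDs change.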
  • 19. 19 Native health checks read_only wsrep_local_recv_queue wsrep_desync wsrep_reject_queries wsrep_sst_donor_rejects_queries primary_partition ProxySQL for Galera • writer_hostgroup: the hostgroup ID that refers to the WRITER • backup_writer_hostgroup: the hostgroup ID referring to the hostgroup that will contain the candidate servers • reader_hostgroup: the reader hostgroup ID, containing the list of servers that need to be taken into consideration • offline_hostgroup: the hostgroup ID that will eventually contain the writer that is put OFFLINE • active: True[1]/False[0], whether this configuration needs to be used or not • max_writers: the MAX number of writers you want to have at the same time. In a sane setup this should always be 1, but if you want to have multiple writers, you can define it up to the number of nodes. • writer_is_also_reader: if true [1] the writer will NOT be removed from the reader HG • max_transactions_behind: the size of wsrep_local_recv_queue after which the node will be set OFFLINE. This must be carefully set, observing the node behaviour. • comment: I suggest putting some meaningful notes here to identify what is what.
  • 20. 20 I have these nodes 192.168.1.205 (Node1) 192.168.1.21 (Node2) 192.168.1.231 (node3) ProxySQL for Galera Will use these groups: Writer HG-> 100 Reader HG-> 101 BackupW HG-> 102 offHG HG-> 9101 INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.205’,100,3306,1000); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.205',101,3306,1000); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.21',101,3306,1000); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight) VALUES ('192.168.1.231',101,3306,1000); insert into mysql_galera_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup, offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) values (100,102,101,9101,1,1,1,16);
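The configuration above only becomes active once loaded to runtime. A minimal follow-up to activate and verify it, using the hostgroup IDs defined above:

```sql
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

-- Check how ProxySQL classified the nodes at runtime
SELECT hostgroup_id, hostname, port, status
FROM runtime_mysql_servers
WHERE hostgroup_id IN (100, 101, 102, 9101)
ORDER BY hostgroup_id;
```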
  • 24. 22 ProxySQL for Group Replication Native health checks Read_only Viable_candidate Transaction_behind • writer_hostgroup: the hostgroup ID that refers to the WRITER • backup_writer_hostgroup: the hostgroup ID referring to the hostgroup that will contain the candidate servers • reader_hostgroup: the reader hostgroup ID, containing the list of servers that need to be taken into consideration • offline_hostgroup: the hostgroup ID that will eventually contain the writer that is put OFFLINE • active: True[1]/False[0], whether this configuration needs to be used or not • max_writers: the MAX number of writers you want to have at the same time. In a sane setup this should always be 1, but if you want to have multiple writers, you can define it up to the number of nodes. • writer_is_also_reader: if true [1] the writer will NOT be removed from the reader HG • max_transactions_behind: determines the maximum number of transactions behind the writer that ProxySQL should allow before shunning the node to prevent stale reads (this is determined by querying the transactions_behind field of the sys.gr_member_routing_candidate_status table in MySQL). • comment: I suggest putting some meaningful notes here to identify what is what.
  • 25. 23 I have these nodes 192.168.4.55 (Node1) 192.168.4.56 (Node2) 192.168.4.57 (node3) ProxySQL for Group Replication Will use these groups: Writer HG-> 400 Reader HG-> 401 BackupW HG-> 402 offHG HG-> 9401 INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.55',400,3306,10000,2000,'GR1'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.55',401,3306,100,2000,'GR1'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.56',401,3306,10000,2000,'GR2'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('192.168.4.57',401,3306,10000,2000,'GR2'); insert into mysql_group_replication_hostgroups (writer_hostgroup,backup_writer_hostgroup,reader_hostgroup, offline_hostgroup,active,max_writers,writer_is_also_reader,max_transactions_behind) values (400,402,401,9401,1,1,1,100); select * from mysql_server_group_replication_log order by 3 desc,1 limit 3 ; +--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+ | hostname | port | time_start_us | success_time_us | viable_candidate | read_only | transactions_behind | error | +--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+ | 192.168.4.57 | 3306 | 1569593421324355 | 3085 | YES | YES | 0 | NULL | | 192.168.4.56 | 3306 | 1569593421321825 | 2889 | YES | YES | 0 | NULL | | 192.168.4.55 | 3306 | 1569593421321435 | 2764 | YES | NO | 0 | NULL | +--------------+------+------------------+-----------------+------------------+-----------+---------------------+-------+
  • 26. 24 ProxySQL for Group Replication
  • 30. 25 Native health checks MySQL ping read_only Latency ProxySQL for NDB Simpler to configure: no replication lag, a node is either ON or OFF delete from mysql_servers where hostgroup_id in (300,301); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.107',300,3306,10000,2000,'DC1 writer'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.107',301,3306,10000,2000,'DC1 Reader'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.108',300,3306,10000,2000,'DC1 writer'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.108',301,3306,10000,2000,'DC1 Reader'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.109',300,3306,10000,2000,'DC1 writer'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.109',301,3306,10000,2000,'DC1 Reader'); INSERT INTO mysql_replication_hostgroups VALUES (300,301,'read_only','NDB_cluster'); LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK; Easy to monitor select b.weight, c.* from stats_mysql_connection_pool c left JOIN runtime_mysql_servers b ON c.hostgroup=b.hostgroup_id and c.srv_host=b.hostname and c.srv_port = b.port where hostgroup in (300,301) order by hostgroup,srv_host desc; +--------+-----------+------------+----------+--------+...+------------+ | weight | hostgroup | srv_host | srv_port | status |...| Latency_us | +--------+-----------+------------+----------+--------+...+------------+ | 10000 | 300 | 10.0.0.109 | 3306 | ONLINE |...| 561 | | 10000 | 300 | 10.0.0.108 | 3306 | ONLINE |...| 494 | | 10000 | 300 | 10.0.0.107 | 3306 | ONLINE |...| 457 | | 10000 | 301 | 10.0.0.109 | 3306 | ONLINE |...| 561 | | 10000 | 301 | 10.0.0.108 | 3306 | ONLINE |...| 494 | | 10000 | 301 | 10.0.0.107 | 3306 | ONLINE |...| 457 | 
+--------+-----------+------------+----------+--------+...+------------+
  • 34. 27 ProxySQL for MySQL Replication Native health checks MySQL ping read_only Latency Stale READS with GTID delete from mysql_servers where hostgroup_id in (500,501); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.50',500,3306,10000,2000,'DC1 writer'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.50',501,3306,10000,2000,'DC1 Reader'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.51',501,3306,10000,2000,'DC1 Reader'); INSERT INTO mysql_servers (hostname,hostgroup_id,port,weight,max_connections,comment) VALUES ('10.0.0.52',501,3306,10000,2000,'DC1 Reader'); INSERT INTO mysql_replication_hostgroups VALUES (500,501,'read_only', 'Simple replica'); LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK; Easy to monitor select b.weight, c.* from stats_mysql_connection_pool c left JOIN runtime_mysql_servers b ON c.hostgroup=b.hostgroup_id and c.srv_host=b.hostname and c.srv_port = b.port where hostgroup in (500,501) order by hostgroup,srv_host desc; select * from mysql_server_read_only_log order by time_start_us desc limit 3; +------------+------+------------------+-----------------+-----------+-------+ | hostname | port | time_start_us | success_time_us | read_only | error | +------------+------+------------------+-----------------+-----------+-------+ | 10.0.0.52 | 3306 | 1569604127710729 | 895 | 1 | NULL | | 10.0.0.51 | 3306 | 1569604127697866 | 542 | 1 | NULL | | 10.0.0.50 | 3306 | 1569604127685005 | 795 | 0 | NULL | +------------+------+------------------+-----------------+-----------+-------+
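The GTID-based stale-read protection relies on the ProxySQL binlog reader. A minimal sketch, assuming proxysql_binlog_reader runs on each backend node on port 3307 and using the hostgroups defined above:

```sql
-- Tell ProxySQL where to reach the GTID tracker on each backend
UPDATE mysql_servers SET gtid_port = 3307 WHERE hostgroup_id IN (500, 501);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;

-- Route SELECTs to the readers, but only to replicas that have already
-- applied the GTIDs the session wrote on the writer hostgroup (500)
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, gtid_from_hostgroup, apply)
VALUES (50, 1, '^SELECT', 501, 500, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```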
  • 35. 28 ProxySQL for MySQL Replication
  • 39. 29 ProxySQL is cool because: • It allows you to better distribute the load • It reduces service downtime by shifting to available node(s) • If the writer is down, reads are still served • With the binlog reader it allows you to identify nodes with stale data even in basic replication • More stuff that we are not covering here… • Multiplexing; sharding; masking; firewalling etc.. In short your HA solution will be more resilient and flexible, which means that your business will be safer. Conclusions