1. HIGH AVAILABILITY WITH
MYSQL
THAVA ALAGU
Staff Engineer, Database Group,
Sun Microsystems, Bangalore
thavamuni.alagu@sun.com
http://blogs.sun.com/thava/
Oct 2008
5. HIGH AVAILABILITY
Data Must Not be Lost
Data + Application Must be
Available Always
6. MYSQL SOLUTIONS …
MySQL Replication (Asynchronous)
MySQL Cluster (Synchronous)
Supporting Solutions:
MySQL Proxy (Alpha)
MySQL Load Balancer (Based on Proxy)
3rd Party Solutions from Partners and Others
7. MYSQL REPLICATION HIGHLIGHTS
Binary Log Updates From Master Applied at
Slave
Usually One Master Multiple Slaves
Asynchronous Copy
Slave could be used as Read-Only
Geographically Distributed, Slow Links, OK
Can be chained:
Master → Slave/Master → Slave
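A minimal master/slave replication setup needs little more than unique server IDs and binary logging on the master. A sketch of the two my.cnf files (hostnames and ID values are illustrative):

```ini
# Master my.cnf -- binary logging must be on
[mysqld]
server-id = 1
log-bin   = mysql-bin

# Slave my.cnf -- only a unique server-id is strictly required
[mysqld]
server-id = 2
```

The slave is then pointed at the master with CHANGE MASTER TO (see the replication commands slide).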
8. MYSQL REPLICATION BENEFITS
Hot Standby Node for Fail-Over
Can be Used for transparent Backups
Load Balancing – Slaves Can be Active for Read
Automated copy across geographically
distributed sites
11. REPLICATION NOTES
Binary logs rotated when they reach max_binlog_size (1GB by default)
You should clean/purge old logs regularly
Master and Slave should run on similar architectures
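Purging can be done manually or automated from my.cnf. A sketch (the log name and dates are illustrative):

```sql
-- Remove binary logs up to a given file or point in time
PURGE MASTER LOGS TO 'mysql-bin.000010';
PURGE MASTER LOGS BEFORE '2008-10-01 00:00:00';

-- Or purge automatically via my.cnf under [mysqld]:
--   expire_logs_days = 7
```

Never delete binary log files from the filesystem directly while slaves may still need them; PURGE checks what is safe to remove.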
12. REPLICATION FORMATS
3 Types:
Statement Based Replication
Row-based Replication
Mixed-Based Replication
Depends on MySQL Version, Storage Engine
Can override the defaults
e.g. use the --binlog-format=ROW option
Usually row based replication is safer
But exceptions exist
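The format can be inspected and changed at runtime on versions that support it (5.1 and later), or fixed in the configuration file:

```sql
-- Check the current binary log format
SHOW VARIABLES LIKE 'binlog_format';

-- Override it globally (takes effect for new sessions)
SET GLOBAL binlog_format = 'ROW';

-- Or set it permanently in my.cnf under [mysqld]:
--   binlog-format = ROW
```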
13. REPLICATION FOR BACKUPS
Slave can be shut down for Cold Backup
Logical or File System backups
Backup Logs too
Logs can be used for Point-In-Time Recovery
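A backup-from-slave plus point-in-time recovery sequence might look like the following sketch (file names, dates and paths are illustrative):

```shell
# On the slave: pause replication, take a consistent logical backup
mysql -e "STOP SLAVE SQL_THREAD;"
mysqldump --all-databases --master-data=2 > backup.sql
mysql -e "START SLAVE SQL_THREAD;"

# Point-in-time recovery: restore the dump, then replay binary logs
# up to just before the failure or bad statement
mysql < backup.sql
mysqlbinlog --stop-datetime="2008-10-15 10:00:00" mysql-bin.000012 | mysql
```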
14. REPLICATE ACROSS STORAGE
ENGINES
Slave can be ARCHIVE Engine
Insurance against SE specific failures
Faster Slave with No Transaction Engine
e.g. InnoDB to MyISAM
How:
Use storage_engine system variable
Mysqldump; edit; reload
Stop slave; Alter table …; Start Slave
Disable some storage engines at Slave
Create new tables with default engine
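Converting an existing replicated table on the slave only can be sketched like this (the table name is illustrative):

```sql
-- On the slave only: switch a table to a different storage engine
STOP SLAVE;
ALTER TABLE mydb.orders ENGINE = MYISAM;
START SLAVE;

-- New tables created by replicated CREATE TABLE statements can pick up
-- a different default engine on the slave:
SET GLOBAL storage_engine = 'MYISAM';
```

Statement-based replication replays the CREATE TABLE text, so a slave-side default engine silently applies to new tables.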
15. REPLICATE FOR SCALING
Use Multiple Slaves For Read Load
Dedicate Master for Write Load
Perfect for Load Balancing
20. REPLICATE USING SSL
For High Security
Uses Certificate-Authority Certificate,
Server Public Key, Server Private Key
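On the slave, the SSL material is supplied through CHANGE MASTER TO. A sketch (host and certificate paths are illustrative):

```sql
-- Point the slave at the master over SSL
CHANGE MASTER TO
  MASTER_HOST     = 'hostA',
  MASTER_SSL      = 1,
  MASTER_SSL_CA   = '/etc/mysql/certs/ca-cert.pem',
  MASTER_SSL_CERT = '/etc/mysql/certs/client-cert.pem',
  MASTER_SSL_KEY  = '/etc/mysql/certs/client-key.pem';
```

The master must be built with SSL support and started with its own ssl-ca, ssl-cert and ssl-key options.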
21. REPLICATION SETUP UPGRADE
Upgrade Slaves First, Then Master
Shutdown, Upgrade, Restart
Watch out for version upgradability (check the supported upgrade path)
22. REPLICATION COMMANDS
On Master:
Show master status
Flush logs
Show master logs
Show binlog events
Purge master logs to 'mysql-bin.005';
On Slave:
Change Master To master_host='hostA', master_user='slave',
master_password='slavepassword';
Show slave status
Load Data From Master; # MyISAM only; locks tables on master
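The most useful health check is SHOW SLAVE STATUS; the key fields to watch are sketched below:

```sql
SHOW SLAVE STATUS\G
-- Key fields in the output:
--   Slave_IO_Running:  Yes   -- I/O thread is reading the master's binlog
--   Slave_SQL_Running: Yes   -- SQL thread is applying relayed events
--   Seconds_Behind_Master: 0 -- estimate of replication lag
--   Last_Error:               -- last statement error, if the SQL thread stopped
```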
23. REPLICATION FAILOVER
Does not happen automatically
Many solutions exist like virtual IP, Sun
Cluster, Linux-HA Heartbeat, etc
Application should retry
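A manual promotion of a slave to master can be sketched as follows, assuming the promoted slave has binary logging enabled and has applied all relayed events (hostnames are illustrative):

```sql
-- 1. On the slave chosen for promotion:
STOP SLAVE;
RESET MASTER;   -- start a fresh binary log as the new master

-- 2. On each remaining slave, repoint to the new master:
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST = 'new-master-host';
START SLAVE;
```

Clients must then be redirected, e.g. by moving a virtual IP to the new master.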
24. MYSQL CLUSTER HIGHLIGHTS
Synchronous
Ultra High Availability
Think Telecom, Banking
Memory (mostly) and Disk based
No Single Point Of Failure
Automatic Failover
28. MYSQL CLUSTER NOTES
ACID transactions
Supports READ COMMITTED only
Synchronous replication uses 2 Phase Commit
Local and Global Checkpoints to disk
Data access through SQL or NDB API
Online Backups supported
All nodes must have the same architecture (all big-endian or all little-endian)
29. MYSQL CLUSTER NOTES (CONTD)
Max number of data nodes = 48
Max (data nodes + management nodes) = 63
Max total number of all nodes = 255 (including SQL nodes)
Tables are strongly recommended to have explicit primary keys
for cluster replication (NDB creates a hidden key otherwise)
30. MYSQL CLUSTER CONFIGURATION
NDB config File = config.ini
Number Of Replicas: 1 up to 4 [usually 2]
Memory Sizing
Per Node Memory = (DataSize * Replicas * 1.1) / Nodes
Default Port Number for Mgmt Node = 1186
Data Node ports are allocated dynamically unless ServerPort is set (e.g. 2202)
SQL Node Config File = my.cnf
[mysqld]
ndbcluster
ndb-connectstring=mgmt-hostname
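A minimal config.ini for a two-replica cluster could look like the following sketch (hostnames and memory values are illustrative):

```ini
[ndbd default]
NoOfReplicas = 2            # each fragment stored on 2 data nodes
DataMemory   = 512M
IndexMemory  = 64M

[ndb_mgmd]
HostName = mgmt-hostname    # listens on port 1186 by default

[ndbd]
HostName = data-node-1

[ndbd]
HostName = data-node-2

[mysqld]                    # one empty slot per SQL node
```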
31. NODE FAILURE DETECTION
Uses Heartbeats
Failed Node Excluded from the cluster
Its workload is taken over by another node in the same node group
Failed node tries to restart itself and then rejoins the
cluster
32. NETWORK PARTITIONING
Also Called Split Brain condition
Two sets of nodes could diverge if both kept accepting writes
Management node acts as arbitrator by default
The partition that wins arbitration survives; the other shuts down
33. CLUSTER CHECKPOINTS
Transactions commit in memory
Local checkpoint writes all node data (plus UNDO log) to
disk; old REDO log entries can then be deleted
Global Checkpoint: committed transactions flushed to the REDO
log as a group; frequency controlled by
TimeBetweenGlobalCheckpoints (default 2000 millisecs)
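Checkpoint frequency is tuned in config.ini; a sketch showing the default values:

```ini
[ndbd default]
TimeBetweenGlobalCheckpoints = 2000  # milliseconds between GCPs
TimeBetweenLocalCheckpoints  = 20    # log2 of write volume that triggers an LCP
```

Note that TimeBetweenLocalCheckpoints is a base-2 logarithmic value, not a time in milliseconds, despite the name.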
35. CLUSTER ROLLING
UPGRADE/DOWNGRADE
Online Upgrade/Downgrade supported.
Shutdown node, replace binary, bring it up
One node at a time in this order:
Management Node(s)
Data Nodes
SQL Nodes
Watch out for upgradability/downgradability of
versions
36. REPLICATING CLUSTERS
Note: Row-based logging must be enabled for this to work
37. ANOTHER HA SOLUTION:
DISTRIBUTED REPLICATED BLOCK
DEVICE
Available on Linux. Provides synchronous data copy.
Uses virtual block device
Replicated from primary server to secondary
Implemented with kernel level and user level software
Often used with Heartbeat Linux Cluster manager
Secondary server is passive – Not for load balancing
Like network-based RAID-1 mirroring, but implemented in software
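A DRBD resource definition follows this general shape; the sketch below assumes hypothetical hostnames, disks and addresses:

```
# /etc/drbd.conf sketch
resource r0 {
  protocol C;                 # fully synchronous replication
  on primary-host {
    device    /dev/drbd0;     # virtual block device seen by MySQL
    disk      /dev/sda7;      # backing physical partition
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on secondary-host {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

MySQL's datadir is placed on the filesystem mounted over /dev/drbd0 on the active node; Heartbeat mounts it and starts mysqld on the survivor after a failover.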
38. RESOURCES …
Developer Zone: Developer Articles, etc.
http://dev.mysql.com
MySQL White Papers
http://www.mysql.com/why-mysql/white-papers/
MySQL Forums:
http://forums.mysql.com
Planet MySQL – Blog aggregator
http://planetmysql.org