2. What is DRBD?
• DRBD is a block device designed as a building block to form HA clusters.
• This is done by mirroring a whole block device via a dedicated network.
• DRBD can be understood as network-based RAID1.
• T uses DRBD-8.2, S uses DRBD-8.4 (may change in the future).
3. Block device (Kernel component)
The normal in-kernel I/O stack:
File system → Buffer cache → Block device → Disk sched → Disk driver
4. DRBD sends I/O to the other node
With DRBD inserted into the stack:
File system → Buffer cache → DRBD → Disk sched → Disk driver
WRITE ops are sent to the secondary over the network.
6. How to set up DRBD
• Prepare DRBD partitions
• Create setup files
/etc/drbd.conf (DRBD-8.2)
/etc/drbd.d/global_common.conf (DRBD-8.4)
/etc/drbd.d/r0.res,r1.res (DRBD-8.4)
• Start DRBD sync
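The steps above can be sketched as a dry-run script. The resource name r0 and the exact command set are assumptions, not taken from the slides; `run` echoes each command instead of executing it, so the sequence can be reviewed before touching a real node.

```shell
# Dry-run sketch of the DRBD setup steps (assumed resource "r0").
# "run" echoes each command instead of executing it.
run() { echo "+ $*"; }

run drbdadm create-md r0        # write DRBD metadata on the prepared partition
run drbdadm up r0               # attach the disk and connect to the peer
# On ONE node only: force it UpToDate to start the initial sync
run drbdadm primary --force r0
```

Note that `primary --force` is the DRBD-8.4 spelling; pre-8.4 releases used `drbdadm -- --overwrite-data-of-peer primary r0` for the same step.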
7. DRBD settings
• In DRBD-8.2,
all the settings are in /etc/drbd.conf
• In DRBD-8.4,
global settings in /etc/drbd.d/global_common.conf
resource level settings in /etc/drbd.d/r<N>.res
• Sample:
http://www.drbd.org/users-guide/re-drbdconf.html
• HA1 and HA2 have identical DRBD config files
• Usage-count (always no)
• Protocol (with protocol C, a WRITE completes only when it has reached the other node as well)
• Sync rate (100MB/sec for sync; no need for a 10Gb NIC)
• Partition name (device minor # for /dev/drbdN)
• Node name / IP address / port number
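A minimal sketch of how these settings might look in DRBD-8.4 syntax. The section placement and values below are assumptions based on the bullet list above, not a copy of the real files:

```
global {
    usage-count no;            # always no, as above
}
common {
    net {
        protocol C;            # WRITE completes when it reached the peer too
    }
    disk {
        resync-rate 100M;      # cap background sync at ~100MB/sec
    }
}
```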
9. Sample drbd.conf (2)
resource r0 {
  protocol C;
  on Machine-HA1 {              # must match what "uname -n" says on HA1
    device /dev/drbd1;
    disk /dev/disk/by-label/XX;
    address 10.0.128.17:7788;
  }
  on Machine-HA2 {              # must match what "uname -n" says on HA2
    device /dev/drbd1;
    disk /dev/disk/by-label/XX;
    address 10.0.128.18:7788;
  }
}

[root@Machine-HA2 ~]# uname -n
Machine-HA2
[root@Machine-HA2 ~]#
10. Resource and Role
• In DRBD, every resource (partition) has a role,
which may be primary or secondary.
• A primary DRBD device can be used for any read/write operation.
• A secondary DRBD device can NOT be used for read/write operations.
• The secondary only receives WRITEs from the primary.
16. What causes DRBD problems
There are 3 types of problems:
1. Network error (bond1) → Outdated
2. Disk error (disk error or filesystem error) → Diskless
3. Role change without sync (typically caused by multiple host reboots) → Inconsistent
17. 1. Network problem
• When bond1 stops working between HA1 and HA2, the DRBD devices on the standby node become Outdated.
How to fix?
• Fix the network issue first.
• DRBD will then recover automatically.
• Without heartbeat, you may need manual intervention.
19. Bond1 stopped (ifdown bond1)
CS (Connection Status) becomes WFConnection (Waiting For Connection).
ST (Status) becomes Unknown on the peer side.
DS (Disk Status) becomes Outdated on the secondary devices.
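The CS/ST/DS fields above come from `/proc/drbd`. A small sketch of pulling them out of one status line; the sample line is illustrative (it mimics a secondary whose peer is unreachable), not captured from a real node:

```shell
# Extract cs/ro/ds from a /proc/drbd style status line.
status=' 1: cs:WFConnection ro:Secondary/Unknown ds:Outdated/DUnknown C r-----'

cs=$(echo "$status" | sed -n 's/.*cs:\([^ ]*\).*/\1/p')
ro=$(echo "$status" | sed -n 's/.*ro:\([^ ]*\).*/\1/p')
ds=$(echo "$status" | sed -n 's/.*ds:\([^ ]*\).*/\1/p')

echo "connection=$cs roles=$ro disks=$ds"
```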
20. How to fix
• Find where the problem is: it can be bond1 on HA1, bond1 on HA2, or the network cable.
• Fix the network issue.
• The DRBD problem will then be fixed automatically.
• If heartbeat is NOT running, DRBD may not recover automatically.
21. Disk I/O error on secondary
• A DRBD device is detached automatically upon a disk error.
• drbd.conf:
  resource r0 {
    disk {
      on-io-error detach;
    }
  }
22. Disk I/O error on secondary
• Upon a disk error, drbdadm detach <res> runs automatically and the secondary devices enter the Diskless state.
• After fixing the disk issue, you need to reattach: drbdadm attach all
• If the internal data on the disk is broken, a sync will run from the UpToDate device to the peer.
23. Disk I/O error on secondary
• Fix the disk issue first.
• Then run drbdadm attach all
• A sync may run.
24. Disk I/O error on primary
• If a disk I/O error happens on the primary, the primary DRBD devices become Diskless.
25. Disk I/O error on primary
• Fix the disk issue first. Then run drbdadm attach all on the bad node.
• Sync will run from UpToDate (secondary) to Inconsistent (primary).
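A dry-run sketch of this recovery; `run` echoes instead of executing, and `cat /proc/drbd` is just one way to watch the resync:

```shell
# Dry-run of the primary-side disk recovery above.
run() { echo "+ $*"; }

# After the disk has been repaired, on the bad node:
run drbdadm attach all          # reattach the lower-level disks
run cat /proc/drbd              # watch the sync from UpToDate to Inconsistent
```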
26. • Attach/Detach — attaches/detaches the lower-level disks
• Connect/Disconnect — connect to / disconnect from the peer node
• Primary/Secondary — define the role of a resource
• Invalidate — invalidate the local data
• Discard data on a resource:
  pre-DRBD-8.4: drbdadm -- --discard-my-data connect <res>
  DRBD-8.4: drbdadm connect --discard-my-data <res>
27. How to check if split-brain happens
• Once SB happens, you will see
  Split-Brain detected, dropping connection!
  in /var/log/messages.
• When SB happens, at least one node becomes StandAlone; the peer can be WFConnection or StandAlone too.
• If SB happens, you need to discard the data on one node.
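Detection can be scripted by grepping for the message quoted above. The sample log line below is illustrative; on a real node you would scan /var/log/messages itself:

```shell
# Look for the split-brain signature in a messages file.
# A temporary sample log stands in for /var/log/messages here.
log=$(mktemp)
cat > "$log" <<'EOF'
kernel: block drbd1: Split-Brain detected, dropping connection!
EOF

if grep -q 'Split-Brain detected' "$log"; then
    result=split-brain
else
    result=ok
fi
echo "$result"
rm -f "$log"
```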
28. Sample plan to fix SB (1)
1. Take a host backup
2. Identify the bad host
3. Identify which node is primary and which is secondary (DRBD)
4. Stop the DB
   service heartbeat stop (HA1/HA2)
   Make sure the DRBD partitions are not mounted
29. Sample plan to fix SB (2)
• drbdadm disconnect all (HA1 / HA2)
• drbdadm secondary all (HA1 / HA2)
• drbdadm disconnect all (HA1 / HA2)
• drbdadm -- --discard-my-data connect all
(only on bad host)
• drbdadm connect all (good host)
• drbdadm connect all (bad host)
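The sequence above as a dry-run script (pre-8.4 syntax, as in the slides); `run` echoes the steps so which-host-runs-what can be checked before executing anything:

```shell
# Dry-run of the split-brain recovery sequence above.
run() { echo "+ $*"; }

# on BOTH hosts
run drbdadm disconnect all
run drbdadm secondary all
run drbdadm disconnect all
# on the BAD host only: throw away its changes
run drbdadm -- --discard-my-data connect all
# on the good host, then the bad host
run drbdadm connect all
```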
30. Sample plan to fix SB (3)
5. Start heartbeat on the good host to make it
Primary.