1. ORACLE CLUSTER INSTALLATION WITH GRID & NFS HIGH AVAILABILITY-12C November 24, 2016
GRID RAC Page 1
ORACLE CLUSTER INSTALLATION WITH GRID, KEEPALIVE
& NFS HIGH AVAILABILITY – 12C RAC
SETTING UP PRE-REQUIREMENTS
Date:
date -s "9 AUG 2013 11:32:08"
SETTING UP EPEL REPOSITORY ON ALL THE SERVERS
yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y
INSTALLING ORACLE ASMLIB PACKAGE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
cd /etc/yum.repos.d ; wget https://public-yum.oracle.com/public-yum-ol6.repo --no-check-certificate
wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
yum install kernel-uek-devel* kernel-devel oracleasm oracleasm-support elfutils-libelf-devel kmod-oracleasm
oracleasmlib tcpdump htop -y
yum install oracleasmlib-2.0.12-1.el6.x86_64.rpm
INSTALLING ORACLE GRID AND DATABASE PRE-REQUIREMENTS ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
yum install binutils-2.* elfutils-libelf-0.* glibc-2.* glibc-common-2.* ksh-2* libaio-0.* libgcc-4.* libstdc++-4.*
make-3.* elfutils-libelf-devel-* gcc-4.* gcc-c++-4.* glibc-devel-2.* glibc-headers-2.* libstdc++-devel-4.*
unixODBC-2.* compat-libstdc++-33* libaio-devel-0.* unixODBC-devel-2.* sysstat-7.* -y
INSTALLING BIND PRE-REQUIREMENTS ON DNS SERVER - (192.168.0.138)
yum -y install bind bind-utils
INSTALLING NFS SERVER PRE-REQUIREMENTS (10.75.40.31 & 10.75.40.32)
yum -y install nfs-utils
TO OVERCOME ORA-00845: MEMORY_TARGET NOT SUPPORTED ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
SQL> startup nomount;
ORA-00845: MEMORY_TARGET not supported on this system
This error comes up when you try to use the Automatic Memory Management (AMM) feature of Oracle 12c. It means your shared memory filesystem (/dev/shm) is not big enough; enlarge it to avoid the error above.
First, log in as root and have a look at the filesystems:
df -hT
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_oracleem-lv_root
93G 19G 69G 22% /
tmpfs 5.9G 112K 5.9G 1% /dev/shm
/dev/sda1 485M 99M 362M 22% /boot
We can see that tmpfs has a size of 6GB. We can change the size of that filesystem by issuing the following
command (where “12g” is the size I want for my MEMORY_TARGET):
mount -t tmpfs shmfs -o size=12g /dev/shm
The shared memory file system should be big enough to accommodate the MEMORY_TARGET and
MEMORY_MAX_TARGET values, or Oracle will throw the ORA-00845 error. Note that when changing
something with the mount command, the changes are not permanent.
To make the change persistent, edit your /etc/fstab file to include the option you specified above:
tmpfs /dev/shm tmpfs size=12g 0 0
SQL> startup nomount
ORACLE instance started.
Total System Global Area 1.1758E+10 bytes
Fixed Size 2239056 bytes
Variable Size 5939135920 bytes
Database Buffers 5804916736 bytes
Redo Buffers 12128256 bytes
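Before starting the instance, the fit between a planned MEMORY_TARGET and the tmpfs size can be checked with plain shell arithmetic (a minimal sketch; the `check_shm` helper is made up, both values in MB):

```shell
# check_shm SHM_MB TARGET_MB: warn if MEMORY_TARGET would not fit
check_shm() {
  local shm_mb=$1 target_mb=$2
  if [ "$target_mb" -gt "$shm_mb" ]; then
    echo "TOO_SMALL: enlarge /dev/shm"
  else
    echo "OK"
  fi
}

check_shm 6144 12288   # 6 GB tmpfs, 12 GB MEMORY_TARGET -> TOO_SMALL
check_shm 12288 12288  # after remounting /dev/shm with size=12g -> OK
```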
ADDING SWAP SPACE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
dd if=/dev/zero of=/root/newswapfile bs=1M count=8198
chmod 600 /root/newswapfile
mkswap /root/newswapfile
swapon /root/newswapfile
To make the change persistent, add the swap file to your /etc/fstab file:
vim /etc/fstab
/root/newswapfile swap swap defaults 0 0
Verify:
swapon -s
free -k
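As a quick sanity check of the swap-file size, the dd arithmetic above can be reproduced in plain shell (no root needed):

```shell
# The dd above writes bs * count bytes: with bs=1M and count=8198 the
# swap file is just over 8 GiB (integer division below prints 8).
bs=$((1024 * 1024))
count=8198
bytes=$(( bs * count ))
echo $(( bytes / 1024 / 1024 / 1024 ))   # prints 8
```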
EDIT “/ETC/SYSCONFIG/NETWORK” AS ROOT USER ON BOTH NODES/RACS - (192.168.0.139 &
192.168.0.140)
NETWORKING=yes
HOSTNAME=kkcodb01
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb01
NETWORKING=yes
HOSTNAME=kkcodb02
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb02
UPDATE /ETC/HOSTS FILE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
Make sure the hosts file has the right entries (remove or comment out the IPv6 lines) and that each IP address and hostname is correct. Edit /etc/hosts as root:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#public
192.168.0.139 kkcodb01 kkcodb01.example.com
192.168.0.140 kkcodb02 kkcodb02.example.com
#vip
192.168.0.143 kkcodb01-vip kkcodb01-vip.example.com
192.168.0.144 kkcodb02-vip kkcodb02-vip.example.com
#scan vip
#192.168.0.145 kkcodb-scan kkcodb-scan.example.com
#192.168.0.146 kkcodb-scan kkcodb-scan.example.com
#192.168.0.147 kkcodb-scan kkcodb-scan.example.com
#192.168.0.148 kkcodb-scan kkcodb-scan.example.com
#priv
10.75.40.143 kkcodb01-priv1 kkcodb01-priv1.example.com
10.75.40.144 kkcodb02-priv1 kkcodb02-priv1.example.com
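Before moving on, it is worth verifying that every expected IP/hostname pair really made it into /etc/hosts. A small grep-based checker (a sketch; the `hosts_has` helper is made up for illustration) run here against a sample file:

```shell
# hosts_has FILE IP HOSTNAME: succeeds if a line starting with IP
# also mentions HOSTNAME as a whole word.
hosts_has() {
  grep -E "^[[:space:]]*$2[[:space:]]" "$1" | grep -qw "$3"
}

cat > /tmp/hosts.sample <<'EOF'
192.168.0.139 kkcodb01 kkcodb01.example.com
192.168.0.140 kkcodb02 kkcodb02.example.com
EOF

hosts_has /tmp/hosts.sample 192.168.0.139 kkcodb01 && echo "ok"
hosts_has /tmp/hosts.sample 192.168.0.139 kkcodb99 || echo "missing"
```

In practice you would point it at /etc/hosts and loop over the public, VIP, and private names on both nodes.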
BOND / TEAM MULTIPLE NETWORK INTERFACES (NIC) INTO A SINGLE INTERFACE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical
“bonded” interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes
provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
Modify the eth0, eth1, eth2, eth3 (up to ethX) config files to enslave them to bond0 and bond1.
Create a bond0 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.0.0
NETMASK=255.255.255.0
IPADDR=192.168.0.139
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
Create a bond1 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.75.40.0
NETMASK=255.255.255.0
IPADDR=10.75.40.143
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/modprobe.d/bonding.conf
alias bond0 bonding
alias bond1 bonding
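Since the four slave ifcfg files differ only in the device name and master, a small generator (a sketch; in practice write the output into /etc/sysconfig/network-scripts/ rather than /tmp) avoids copy-paste mistakes:

```shell
# make_slave DEVICE MASTER: emit an ifcfg slave file for DEVICE
# enslaved to MASTER, matching the templates above.
make_slave() {
  local dev=$1 master=$2
  cat <<EOF
DEVICE=$dev
TYPE=Ethernet
MASTER=$master
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
EOF
}

make_slave eth0 bond0 > /tmp/ifcfg-eth0
make_slave eth2 bond1 > /tmp/ifcfg-eth2
grep MASTER /tmp/ifcfg-eth2   # prints MASTER=bond1
```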
CREATE USER AND GROUPS FOR ORACLE DATABASE AND GRID ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
mkdir -p /app/oracle
mkdir -p /app/12.1.0/grid
chown grid:dba /app
chown grid:dba /app/oracle
chown grid:dba /app/12.1.0
chown grid:dba /app/12.1.0/grid
chmod -R 775 /app
mkdir -p /u01 ; mkdir -p /u02 ; mkdir -p /u03
(Giving read/write/execute permission to the grid user in the dba group)
chown grid:dba /u01
chown grid:dba /u02
chown grid:dba /u03
chmod +x /u01
chmod +x /u02
chmod +x /u03
or
(Giving read/write/execute permission to the grid/oracle users - all users in the dba group)
chgrp dba /u01
chgrp dba /u02
chgrp dba /u03
chmod g+swr /u01
chmod g+swr /u02
chmod g+swr /u03
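The g+swr form adds group read/write and also sets the setgid bit, so files created under the directory inherit the dba group. A quick demonstration on a scratch directory (a sketch using a /tmp path instead of /u01):

```shell
mkdir -p /tmp/u01.demo        # fresh directory, typically mode 755
chmod g+swr /tmp/u01.demo     # group read/write plus setgid
stat -c '%A' /tmp/u01.demo    # the setgid bit appears in the group triplet
```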
SETTING UP ENVIRONMENT VARIABLES FOR OS ACCOUNTS: GRID AND ORACLE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
@ the kkcodb01 as the grid user
su – grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ the kkcodb02 as the grid user
su – grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ the kkcodb01 as the oracle user
su – oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ the kkcodb02 as the oracle user
su – oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022
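The four profile blocks differ only in the hostname, the SID, and which home ORACLE_HOME points at. A generator along these lines (hypothetical, not part of the guide's required steps) keeps them from drifting apart:

```shell
# oracle_profile HOSTNAME SID HOME_VAR: print the node-specific part of
# the profile. HOME_VAR is GRID_HOME for grid users, DB_HOME for oracle.
oracle_profile() {
  local host=$1 sid=$2 homevar=$3
  cat <<EOF
ORACLE_HOSTNAME=$host; export ORACLE_HOSTNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=\$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=\$$homevar; export ORACLE_HOME
ORACLE_SID=$sid; export ORACLE_SID
EOF
}

oracle_profile kkcodb02 RAC2 GRID_HOME | grep ORACLE_SID
# prints: ORACLE_SID=RAC2; export ORACLE_SID
```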
cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
cat >> /etc/csh.login <<'EOF'
if ( $USER == "oracle" || $USER == "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
EOF
Execute shutdown -r now on both nodes.
DOWNLOADING ORACLE DATABASE AND GRID INFRASTRUCTURE SOFTWARE
You would have to download Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Linux x86-64 –
here
Download – linuxamd64_12102_grid_1of2.zip
Download – linuxamd64_12102_grid_2of2.zip
Downloading and installing Oracle Database software
You would have to download Oracle Database 12c Release (12.1.0.2.0) for Linux x86-64 – here
Download – linuxamd64_12102_database_1of2.zip
Download – linuxamd64_12102_database_2of2.zip
Copy zip files to kkcodb01 server to /tmp directory using WinSCP
As a root user,
cd /tmp
chmod +x *.zip
for i in /tmp/linuxamd64_12102_grid_*.zip; do unzip $i -d /home/grid/stage; done
for i in /tmp/linuxamd64_12102_database_*.zip; do unzip $i -d /home/oracle/stage; done
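A quick pre-flight check (a sketch) that both halves of each archive actually made it into /tmp before unzipping:

```shell
# Report each expected archive as ok or MISSING before running unzip.
for f in /tmp/linuxamd64_12102_grid_{1,2}of2.zip \
         /tmp/linuxamd64_12102_database_{1,2}of2.zip; do
  if [ -f "$f" ]; then echo "ok: $f"; else echo "MISSING: $f"; fi
done
```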
INSTALL BIND ON THE DNS SERVER (192.168.0.138) TO RESOLVE HOSTNAMES AND IP
ADDRESSES.
yum -y install bind bind-utils
Configure BIND.
vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
acl "trusted" {
192.168.0.0/24;
10.75.40.0/24;
};
options {
listen-on port 53 { 127.0.0.1; 192.168.0.0/24; 10.75.40.0/24;};
#listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-transfer { any; };
allow-query { localhost; trusted; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
include "/etc/named/named.conf.local";
Reverse (in-addr.arpa) zones are named from the network octets in reverse order: 192.168.0.0/24 is served by "0.168.192.in-addr.arpa" and the private 10.75.40.0/24 network by "40.75.10.in-addr.arpa", so each network needs its own zone file.
vim /etc/named/named.conf.local
zone "example.com" {
type master;
file "/etc/named/zones/db.example.com"; # zone file path
};
zone "0.168.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.192.168.0"; # reverse zone for 192.168.0.0/24
};
zone "40.75.10.in-addr.arpa" {
type master;
file "/etc/named/zones/db.10.75.40"; # reverse zone for 10.75.40.0/24
};
vim /etc/named/zones/db.example.com
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; name servers - A records
ns1.example.com. IN A 192.168.0.138
; A records
kkcodb-scan IN A 192.168.0.145
kkcodb-scan IN A 192.168.0.146
kkcodb-scan IN A 192.168.0.147
kkcodb-scan IN A 192.168.0.148
;
kkcodb01-priv1 IN A 10.75.40.143
kkcodb02-priv1 IN A 10.75.40.144
;
kkcodb01 IN A 192.168.0.139
kkcodb02 IN A 192.168.0.140
;
nfs IN A 192.168.0.30
nfs-active IN A 10.75.40.31
nfs-passive IN A 10.75.40.32
vim /etc/named/zones/db.192.168.0
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; PTR records (last octet only; the zone covers 192.168.0.0/24)
138 IN PTR ns1.example.com. ; 192.168.0.138
;
145 IN PTR kkcodb-scan.example.com. ; 192.168.0.145
146 IN PTR kkcodb-scan.example.com. ; 192.168.0.146
147 IN PTR kkcodb-scan.example.com. ; 192.168.0.147
148 IN PTR kkcodb-scan.example.com. ; 192.168.0.148
;
139 IN PTR kkcodb01.example.com. ; 192.168.0.139
140 IN PTR kkcodb02.example.com. ; 192.168.0.140
;
30 IN PTR nfs.example.com. ; 192.168.0.30
vim /etc/named/zones/db.10.75.40
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; PTR records (last octet only; the zone covers 10.75.40.0/24)
143 IN PTR kkcodb01-priv1.example.com. ; 10.75.40.143
144 IN PTR kkcodb02-priv1.example.com. ; 10.75.40.144
;
31 IN PTR nfs-active.example.com. ; 10.75.40.31
32 IN PTR nfs-passive.example.com. ; 10.75.40.32
chkconfig named on
service named restart
named-checkzone 0.168.192.in-addr.arpa /etc/named/zones/db.192.168.0
named-checkzone 40.75.10.in-addr.arpa /etc/named/zones/db.10.75.40
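Reverse-zone record names are just the IP octets reversed under in-addr.arpa. A tiny helper (a sketch; `ptr_name` is made up) to compute the PTR owner name for any address when sanity-checking the zone files by hand:

```shell
# ptr_name A.B.C.D -> D.C.B.A.in-addr.arpa
ptr_name() {
  local IFS=.
  set -- $1                       # split the address on dots
  echo "$4.$3.$2.$1.in-addr.arpa"
}

ptr_name 192.168.0.139   # prints 139.0.168.192.in-addr.arpa
ptr_name 10.75.40.143    # prints 143.40.75.10.in-addr.arpa
```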
CONFIGURE THE DNS CLIENT SETTINGS AS FOLLOWS ON ALL SERVERS (INCLUDING THE
BIND SERVER ITSELF):
vim /etc/sysconfig/networking/profiles/default/resolv.conf
nameserver 192.168.0.138
search example.com
vim /etc/resolv.conf
nameserver 192.168.0.138
search example.com
service network restart
chkconfig NetworkManager off
service network restart
cat /etc/resolv.conf
DEVICE eth0, eth1…ethX DOES NOT SEEM TO BE PRESENT:
This can be fixed by deleting the /etc/udev/rules.d/70-persistent-net.rules file
and restarting the virtual machine, which generates a new file and sets everything up correctly.
Also remove the .ssh directory from the individual users and restart the servers.
LOGICAL VOLUME MANAGEMENT – LVM ON NFS & NFS BACKUP KEEPER SERVERS
LVM is a logical volume manager for the Linux kernel that manages disk drives and similar mass-storage devices.
Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from the HP-UX
volume manager.
The installers for the CrunchBang, CentOS, Debian, Fedora, Gentoo, Mandriva, MontaVista Linux, openSUSE,
Pardus, Red Hat Enterprise Linux, Slackware, SLED, SLES, Linux Mint, Kali Linux, and Ubuntu distributions are
LVM-aware and can install a bootable system with a root filesystem on a logical volume.
LVM IS COMMONLY USED FOR THE FOLLOWING PURPOSES:
1. Managing large hard disk farms by allowing disks to be added and replaced without downtime or service
disruption, in combination with hot swapping.
2. On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition
might need to be in the future, LVM allows file systems to be easily resized later as needed.
3. Performing consistent backups by taking snapshots of the logical volumes.
4. Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID
0, but more similar to JBOD), allowing for dynamic volume resizing.
5. The Ganeti solution stack relies on the Linux Logical Volume Manager.
6. LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an
abstraction of continuity and ease-of-use for managing hard drive replacement, re-partitioning, and backup.
THE LVM CAN:
1. Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
2. Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
3. Create read-only snapshots of logical volumes (LVM1).
4. Create read-write snapshots of logical volumes (LVM2).
5. Create RAID logical volumes (available in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.
6. Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
7. Configure a RAID 1 backend device (a PV) as write-mostly, resulting in reads being avoided to such devices.
8. Allocate thin-provisioned logical volumes from a pool.
9. Move online logical volumes between PVs.
10. Split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful
when migrating whole logical volumes to or from offline storage.
11. Create hybrid volumes by using the dm-cache target, which allows one or more fast storage devices, such as
flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).
CREATE A PHYSICAL VOLUME
Input Command
pvcreate -ff /dev/sdb
Output
Physical volume "/dev/sdb" successfully created
DISPLAY A STATUS OF PHYSICAL VOLUMES
Input Command
pvdisplay /dev/sdb
Output
"/dev/sdb" is a new physical volume of "150.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 150.00 GiB
CREATE A VOLUME GROUP
Input Command
vgcreate volg1 /dev/sdb
Output
Volume group "volg1" successfully created
DISPLAY VOLUME GROUPS
Input Command
vgdisplay
Output
--- Volume group ---
VG Name volg1
System ID
Format lvm2
VG Access read/write
VG Status resizable
VG Size 150.00 GiB
CREATE A LOGICAL VOLUME
Input Command
lvcreate -L 149G -n lv_data volg1
NOTE: creates a logical volume 'lv_data' of 149 GB in the volume group 'volg1' (slightly smaller than the 150 GB VG so the allocation fits within the available extents)
Output
Logical volume "lv_data" created
DISPLAY STATUS OF LOGICAL VOLUMES
Input Command
lvdisplay
Output
--- Logical volume ---
LV Path /dev/volg1/lv_data
LV Name lv_data
VG Name volg1
LV Write Access read/write
LV Status available
FORMATTING LOGICAL VOLUME BEFORE MOUNT IT.
Input Command
mkfs.ext4 /dev/volg1/lv_data
MOUNTING LOGICAL VOLUME INTO A SPECIFIC USER’S FOLDER
Input Command
mkdir -p /u01/VM/nfs_shares
mount /dev/volg1/lv_data /u01/VM/nfs_shares
vim /etc/fstab
/dev/volg1/lv_data /u01/VM/nfs_shares ext4 defaults 0 0
CONFIGURING LSYNCD BACKUP SERVER AS A BACKUP KEEPER FOR THE NFS SERVER
(CONFIGURE A SEPARATE NETWORK ADDRESS FOR THE BACKUP REPLICATION)
Lsyncd is a synchronization daemon based primarily on rsync. It runs on the "master" server and can sync/mirror
any file or directory change within seconds to your "slave" servers; you can have as many slave servers as you
want. Lsyncd constantly watches a local directory, monitoring filesystem changes using inotify/fsevents.
By default, lsyncd uses rsync to send the data to the slave machines, though other transports are possible.
It does not require you to build new filesystems or block devices, and it does not harm your server's I/O performance.
yum -y install lua lua-devel pkgconfig gcc asciidoc lsyncd rsync
@ NFS SERVER (10.75.40.30/192.168.0.30)
vim /etc/lsyncd.conf
settings={
logfile="/var/log/lsyncd.log",
statusFile="/tmp/lsyncd.stat",
statusInterval=1,
}
sync{
default.rsync,
source="/u01/VM/nfs_shares",
target="192.168.0.31:/u01/VM/nfs_shares",
rsync={
compress=true,
acls=true,
verbose=true,
owner=true,
group=true,
perms=true,
rsh="/usr/bin/ssh -l root -i /root/.ssh/id_rsa -p 22 -o StrictHostKeyChecking=no",
},
}
service lsyncd start
chkconfig lsyncd on
mkdir -p /var/log/lsyncd
GENERATE THE SSH PUBLIC KEYS BETWEEN NFS & NFS BACKUP KEEPER SERVERS
#!/bin/sh
echo "Are both servers reachable, with root access enabled? (y/n)"
read sslkeygen
case $sslkeygen in
y)
echo "Please enter the IP address of the Source Linux Server node."
read ipaddr1
echo "Please enter the IP address of the Destination Linux Server."
read ipaddr2
echo ""
echo "Generating SSH key..."
ssh-keygen -t rsa
echo ""
echo "Copying SSH key to the Destination Linux Server..."
echo "Please enter the root password for the Remote Linux Server."
ssh root@$ipaddr2 mkdir -p .ssh
cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr2 'cat >> .ssh/authorized_keys'
ssh root@$ipaddr2 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
echo ""
echo "SSH key authentication successfully set up ... continuing with the RSA key installation
from the Remote server back to the Source server..."
echo "Generating SSH key on Destination Server..."
ssh root@$ipaddr2 ssh-keygen -t rsa
echo ""
echo "Copying SSH key back to the Source Linux Server..."
echo "Please enter the root password for the Remote Linux Server."
mkdir -p .ssh
ssh root@$ipaddr2 cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr1 'cat >> .ssh/authorized_keys'
chmod 700 .ssh; chmod 640 .ssh/authorized_keys
echo ""
;;
n)
echo "Root access must be enabled on the second machine...exiting!"
exit 1
;;
*)
echo "Unknown choice ... exiting!"
exit 2
esac
CONFIGURING NFS MOUNT POINTS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
vim /etc/fstab
10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_2 /u02 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_3 /u03 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
mount /u01
mount /u02
mount /u03
df -hT
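The critical options on these mounts (hard, actimeo=0 in particular) are what Oracle documents for RAC NFS storage, so it is worth confirming they survived any editing. A small parse of an fstab-style line (a sketch run against a copy of one line above):

```shell
# Pull the options field (4th column) out of an fstab-style line and
# check each required NFS option is present.
line='10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0'
opts=$(echo "$line" | awk '{print $4}')
for required in hard noac actimeo=0; do
  case ",$opts," in
    *,"$required",*) echo "ok: $required" ;;
    *)               echo "MISSING: $required" ;;
  esac
done
```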
INSTALLING ORACLE GRID ON BOTH NODES FROM 192.168.0.139
INSTALLING ORACLE DATABASE ON BOTH NODES FROM 192.168.0.139