ORACLE CLUSTER INSTALLATION WITH GRID, KEEPALIVED
& NFS HIGH AVAILABILITY – 12C RAC
SETTING UP PREREQUISITES
Date:
date -s "9 AUG 2013 11:32:08"
SETTING UP EPEL REPOSITORY ON ALL THE SERVERS
yum install http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y
INSTALLING ORACLE ASMLIB PACKAGE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
cd /etc/yum.repos.d ; wget https://public-yum.oracle.com/public-yum-ol6.repo --no-check-certificate
wget http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6 -O /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
yum install kernel-uek-devel* kernel-devel oracleasm oracleasm-support elfutils-libelf-devel kmod-oracleasm
oracleasmlib tcpdump htop -y
yum install oracleasmlib-2.0.12-1.el6.x86_64.rpm
INSTALLING ORACLE GRID AND DATABASE PREREQUISITES ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
yum install binutils-2.* elfutils-libelf-0.* glibc-2.* glibc-common-2.* ksh-2* libaio-0.* libgcc-4.* libstdc++-4.*
make-3.* elfutils-libelf-devel-* gcc-4.* gcc-c++-4.* glibc-devel-2.* glibc-headers-2.* libstdc++-devel-4.*
unixODBC-2.* compat-libstdc++-33* libaio-devel-0.* unixODBC-devel-2.* sysstat-7.* -y
INSTALLING BIND PREREQUISITES ON THE DNS SERVER - (192.168.0.138)
yum -y install bind bind-utils
INSTALLING NFS SERVER PREREQUISITES (10.75.40.31 & 10.75.40.32)
yum -y install nfs-utils
TO OVERCOME ORA-00845: MEMORY_TARGET NOT SUPPORTED ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
SQL> startup nomount;
ORA-00845: MEMORY_TARGET not supported on this system
This error occurs when the Automatic Memory Management (AMM) feature of Oracle 12c is used but the shared
memory filesystem (/dev/shm) is not big enough; enlarge the shared memory filesystem to avoid the error.
First, log in as root and check the filesystem:
df -hT
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_oracleem-lv_root
93G 19G 69G 22% /
tmpfs 5.9G 112K 5.9G 1% /dev/shm
/dev/sda1 485M 99M 362M 22% /boot
We can see that tmpfs has a size of 6GB. We can change the size of that filesystem by issuing the following
command (where “12g” is the size I want for my MEMORY_TARGET):
mount -t tmpfs shmfs -o size=12g /dev/shm
The shared memory file system should be big enough to accommodate the MEMORY_TARGET and
MEMORY_MAX_TARGET values, or Oracle will throw the ORA-00845 error. Note that when changing
something with the mount command, the changes are not permanent.
To make the change persistent, edit your /etc/fstab file to include the option you specified above:
tmpfs /dev/shm tmpfs size=12g 0 0
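To apply the new size immediately without a reboot, the tmpfs can simply be remounted and checked (a quick optional verification):
mount -o remount,size=12g /dev/shm
df -h /dev/shm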
SQL> startup nomount
ORACLE instance started.
Total System Global Area 1.1758E+10 bytes
Fixed Size 2239056 bytes
Variable Size 5939135920 bytes
Database Buffers 5804916736 bytes
Redo Buffers 12128256 bytes
ADDING SWAP SPACE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
dd if=/dev/zero of=/root/newswapfile bs=1M count=8198
chmod 600 /root/newswapfile
mkswap /root/newswapfile
swapon /root/newswapfile
To make the swap file persistent across reboots, add it to /etc/fstab:
vim /etc/fstab
/root/newswapfile swap swap defaults 0 0
Verify:
swapon -s
free -k
EDIT “/ETC/SYSCONFIG/NETWORK” AS ROOT USER ON BOTH NODES/RACS - (192.168.0.139 &
192.168.0.140)
NETWORKING=yes
HOSTNAME=kkcodb01
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb01
NETWORKING=yes
HOSTNAME=kkcodb02
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb02
UPDATE /ETC/HOSTS FILE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
Make sure the hosts file has the right entries (remove or comment out IPv6 lines) and that each IP address and
hostname is correct. Edit /etc/hosts as root:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#public
192.168.0.139 kkcodb01 kkcodb01.example.com
192.168.0.140 kkcodb02 kkcodb02.example.com
#vip
192.168.0.143 kkcodb01-vip kkcodb01-vip.example.com
192.168.0.144 kkcodb02-vip kkcodb02-vip.example.com
#scan vip
#192.168.0.145 kkcodb-scan kkcodb-scan.example.com
#192.168.0.146 kkcodb-scan kkcodb-scan.example.com
#192.168.0.147 kkcodb-scan kkcodb-scan.example.com
#192.168.0.148 kkcodb-scan kkcodb-scan.example.com
#priv
10.75.40.143 kkcodb01-priv1 kkcodb01-priv1.example.com
10.75.40.144 kkcodb02-priv1 kkcodb02-priv1.example.com
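As a quick optional connectivity check, the public and private addresses should answer from each node; the VIP and SCAN addresses will only respond once Grid Infrastructure is up:
ping -c 2 kkcodb01
ping -c 2 kkcodb02
ping -c 2 kkcodb01-priv1
ping -c 2 kkcodb02-priv1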
BOND / TEAM MULTIPLE NETWORK INTERFACES (NIC) INTO A SINGLE INTERFACE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical
“bonded” interface. The behavior of the bonded interfaces depends upon the mode; generally speaking, modes
provide either hot standby or load balancing services. Additionally, link integrity monitoring may be performed.
Modify the eth0, eth1, eth2, eth3 ... ethX config files to enslave them to bond0 and bond1.
Create a bond0 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.0.0
NETMASK=255.255.255.0
IPADDR=192.168.0.139
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
Create a bond1 Configuration File
vim /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.75.40.0
NETMASK=255.255.255.0
IPADDR=10.75.40.143
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"
vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no
vim /etc/modprobe.d/bonding.conf
alias bond0 bonding
alias bond1 bonding
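After restarting the network service, the state of each bond and its slave interfaces can be verified (a quick check, assuming the interface names above):
service network restart
cat /proc/net/bonding/bond0
cat /proc/net/bonding/bond1
ip addr show bond0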
CREATE USER AND GROUPS FOR ORACLE DATABASE AND GRID ON BOTH NODES/RACS -
(192.168.0.139 & 192.168.0.140)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
mkdir -p /app/oracle
mkdir -p /app/12.1.0/grid
chown grid:dba /app
chown grid:dba /app/oracle
chown grid:dba /app/12.1.0
chown grid:dba /app/12.1.0/grid
chmod -R 775 /app
mkdir -p /u01 ; mkdir -p /u02 ; mkdir -p /u03
(Giving read/write/execute permission to the grid user in the dba group)
chown grid:dba /u01
chown grid:dba /u02
chown grid:dba /u03
chmod +x /u01
chmod +x /u02
chmod +x /u03
or
(Giving read/write/execute permission to all users in the dba group, i.e. grid and oracle)
chgrp dba /u01
chgrp dba /u02
chgrp dba /u03
chmod g+swr /u01
chmod g+swr /u02
chmod g+swr /u03
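A quick sanity check of the accounts and directory ownership created above (optional):
id grid
id oracle
ls -ld /app /u01 /u02 /u03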
SETTING UP ENVIRONMENT VARIABLES FOR OS ACCOUNTS: GRID AND ORACLE ON BOTH
NODES/RACS - (192.168.0.139 & 192.168.0.140)
@ kkcodb01 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ kkcodb02 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ kkcodb01 as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
@ kkcodb02 as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export
CLASSPATH
umask 022
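After logging in again, each profile can be spot-checked on both nodes, for example (illustrative):
su - grid -c 'echo $ORACLE_HOME $ORACLE_SID'
su - oracle -c 'echo $ORACLE_HOME $ORACLE_SID'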
KERNEL PARAMETERS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
MEMTOTAL=$(free -b | sed -n '2p' | awk '{print $2}')
SHMMAX=$(expr $MEMTOTAL / 2)
SHMMNI=4096
PAGESIZE=$(getconf PAGE_SIZE)
cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = $SHMMAX
kernel.shmall = $(( ($SHMMAX / $PAGESIZE) * ($SHMMNI / 16) ))
kernel.shmmni = $SHMMNI
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF
cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle hard memlock 5437300
EOF
cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
EOF
cat >> /etc/csh.login <<'EOF'
if ( $USER == "oracle" || $USER == "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
EOF
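Before rebooting, the new kernel parameters can optionally be loaded and confirmed straight away:
sysctl -p
sysctl fs.aio-max-nr fs.file-max kernel.shmmax kernel.shmall
The shell limits in /etc/security/limits.conf only take effect at the next login of the oracle and grid users (ulimit -u and ulimit -n will show them).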
Execute the shutdown -r now on both nodes
DOWNLOADING ORACLE DATABASE AND GRID INFRASTRUCTURE SOFTWARE
You would have to download Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Linux x86-64 –
here
Download – linuxamd64_12102_grid_1of2.zip
Download – linuxamd64_12102_grid_2of2.zip
Downloading and installing Oracle Database software
You would have to download Oracle Database 12c Release (12.1.0.2.0) for Linux x86-64 – here
Download – linuxamd64_12102_database_1of2.zip
Download – linuxamd64_12102_database_2of2.zip
Copy the zip files to the /tmp directory on the kkcodb01 server using WinSCP.
As a root user,
cd /tmp
chmod +x *.zip
for i in /tmp/linuxamd64_12102_grid_*.zip; do unzip $i -d /home/grid/stage; done
for i in /tmp/linuxamd64_12102_database_*.zip; do unzip $i -d /home/oracle/stage; done
INSTALL BIND TO CONFIGURE THE DNS SERVER ON 192.168.0.138, WHICH RESOLVES DOMAIN NAMES
AND IP ADDRESSES.
yum -y install bind bind-utils
Configure BIND.
vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
acl "trusted" {
192.168.0.0/24;
10.75.40.0/24;
};
options {
listen-on port 53 { 127.0.0.1; 192.168.0.0/24; 10.75.40.0/24;};
#listen-on-v6 port 53 { ::1; };
directory "/var/named";
dump-file "/var/named/data/cache_dump.db";
statistics-file "/var/named/data/named_stats.txt";
memstatistics-file "/var/named/data/named_mem_stats.txt";
allow-transfer { any; };
allow-query { localhost; trusted; };
recursion yes;
dnssec-enable yes;
dnssec-validation yes;
/* Path to ISC DLV key */
bindkeys-file "/etc/named.iscdlv.key";
managed-keys-directory "/var/named/dynamic";
};
logging {
channel default_debug {
file "data/named.run";
severity dynamic;
};
};
zone "." IN {
type hint;
file "named.ca";
};
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
include "/etc/named/named.conf.local";
vim /etc/named/named.conf.local
zone "example.com" {
type master;
file "/etc/named/zones/db.example.com"; # zone file path
};
zone "0.192.in-addr.arpa" {
type master;
file "/etc/named/zones/db.192.0"; # 192.168.0.0/16
};
vim /etc/named/zones/db.example.com
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; name servers - A records
ns1.example.com. IN A 192.168.0.138
; A records
kkcodb-scan IN A 192.168.0.145
kkcodb-scan IN A 192.168.0.146
kkcodb-scan IN A 192.168.0.147
kkcodb-scan IN A 192.168.0.148
;
kkcodb01-priv1 IN A 10.75.40.143
kkcodb02-priv1 IN A 10.75.40.144
;
kkcodb01 IN A 192.168.0.139
kkcodb02 IN A 192.168.0.140
;
nfs IN A 192.168.0.30
nfs-active IN A 10.75.40.31
nfs-pasive IN A 10.75.40.32
vim /etc/named/zones/db.192.0
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
3 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
; name servers - NS records
IN NS ns1.example.com.
; PTR Records
138.0 IN PTR ns1.example.com. ; 192.168.0.138
;
145.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.145
146.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.146
147.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.147
148.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.148
;
143.40 IN PTR kkcodb01-priv1.example.com. ; 10.75.40.143
144.40 IN PTR kkcodb02-priv1.example.com. ; 10.75.40.144
;
139.0 IN PTR kkcodb01.example.com. ; 192.168.0.139
140.0 IN PTR kkcodb02.example.com. ; 192.168.0.140
;
30.0 IN PTR nfs.example.com. ; 192.168.0.30
31.40 IN PTR nfs-active.example.com. ; 10.75.40.31
32.40 IN PTR nfs-pasive.example.com. ; 10.75.40.32
chkconfig named on
service named restart
named-checkzone 168.192.in-addr.arpa /etc/named/zones/db.192.0
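Once named is running, the zones can be verified from the DNS server itself; the SCAN name should return all four addresses in round-robin order (illustrative checks):
named-checkconf /etc/named.conf
dig @192.168.0.138 kkcodb-scan.example.com +short
dig @192.168.0.138 -x 192.168.0.139 +short
nslookup kkcodb01.example.com 192.168.0.138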
@ ALL SERVERS (INCLUDING THE BIND SERVER ITSELF), CONFIGURE THE DNS CLIENT SETTINGS AS
FOLLOWS:
vim /etc/sysconfig/networking/profiles/default/resolv.conf
nameserver 192.168.0.138
search example.com
vim /etc/resolv.conf
nameserver 192.168.0.138
search example.com
service network restart
chkconfig NetworkManager off
service network restart
cat /etc/resolv.conf
IF DEVICE eth0, eth1 ... ethX DOES NOT SEEM TO BE PRESENT:
I was able to fix the problem by deleting the /etc/udev/rules.d/70-persistent-net.rules file
and restarting the virtual machine, which generated a new file and got everything set up correctly.
Remove the .ssh directory from the individual users and restart the servers.
LOGICAL VOLUME MANAGEMENT – LVM ON NFS & NFS BACKUP KEEPER SERVERS
LVM is a logical volume manager for the Linux kernel that manages disk drives and similar mass-storage devices.
Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from HP-UX's volume
manager.
The installers for the CrunchBang, CentOS, Debian, Fedora, Gentoo, Mandriva, MontaVista Linux, openSUSE,
Pardus, Red Hat Enterprise Linux, Slackware, SLED, SLES, Linux Mint, Kali Linux, and Ubuntu distributions are
LVM-aware and can install a bootable system with a root filesystem on a logical volume.
LVM IS COMMONLY USED FOR THE FOLLOWING PURPOSES:
1. Managing large hard disk farms by allowing disks to be added and replaced without downtime or service
disruption, in combination with hot swapping.
2. On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition
might need to be in the future, LVM allows file systems to be easily resized later as needed.
3. Performing consistent backups by taking snapshots of the logical volumes.
4. Creating single logical volumes of multiple physical volumes or entire hard disks (somewhat similar to RAID
0, but more similar to JBOD), allowing for dynamic volume resizing.
5. The Ganeti solution stack relies on the Linux Logical Volume Manager.
6. LVM can be considered a thin software layer on top of the hard disks and partitions, which creates an
abstraction of continuity and ease-of-use for managing hard drive replacement, re-partitioning, and backup.
THE LVM CAN:
1. Resize volume groups online by absorbing new physical volumes (PV) or ejecting existing ones.
2. Resize logical volumes (LV) online by concatenating extents onto them or truncating extents from them.
3. Create read-only snapshots of logical volumes (LVM1).
4. Create read-write snapshots of logical volumes (LVM2).
5. Create RAID logical volumes (available in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.
6. Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
7. Configure a RAID 1 backend device (a PV) as write-mostly, resulting in reads being avoided to such devices.
8. Allocate thin-provisioned logical volumes from a pool.
9. Move online logical volumes between PVs.
10. Split or merge volume groups in situ (as long as no logical volumes span the split).
This can be useful when migrating whole logical volumes to or from offline storage.
11. Create hybrid volumes by using the dm-cache target, which allows one or more fast storage devices, such as
flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).
CREATE A PHYSICAL VOLUME
Input Command
pvcreate -ff /dev/sdb
Output
Physical volume "/dev/sdb" successfully created
DISPLAY A STATUS OF PHYSICAL VOLUMES
Input Command
pvdisplay /dev/sdb
Output
"/dev/sdb" is a new physical volume of "150.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 150.00 GiB
CREATE A VOLUME GROUP
Input Command
vgcreate volg1 /dev/sdb
Output
Volume group "volg1" successfully created
DISPLAY VOLUME GROUPS
Input Command
vgdisplay
Output
--- Volume group ---
VG Name volg1
System ID
Format lvm2
VG Access read/write
VG Status resizable
VG Size 150.00 GiB
CREATE A LOGICAL VOLUME
Input Command
lvcreate -L 149G -n lv_data volg1
NOTE: creates a logical volume 'lv_data' of 149 GB in the volume group 'volg1'
Output
Logical volume "lv_data" created
DISPLAY STATUS OF LOGICAL VOLUMES
Input Command
lvdisplay
Output
--- Logical volume ---
LV Path /dev/volg1/lv_data
LV Name lv_data
VG Name volg1
LV Write Access read/write
LV Status available
FORMATTING THE LOGICAL VOLUME BEFORE MOUNTING IT.
Input Command
mkfs.ext4 /dev/volg1/lv_data
MOUNTING THE LOGICAL VOLUME ON A SPECIFIC DIRECTORY
Input Command
mkdir -p /u01/VM/nfs_shares
mount /dev/volg1/lv_data /u01/VM/nfs_shares
vim /etc/fstab
/dev/volg1/lv_data /u01/VM/nfs_shares ext4 defaults 0 0
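A quick check that the volume is mounted and that the fstab entry works (optional):
df -hT /u01/VM/nfs_shares
lvs
mount -a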
CONFIGURING LSYNCD BACKUP SERVER AS A BACKUP KEEPER FOR THE NFS SERVER
(CONFIGURE A SEPARATE NETWORK ADDRESS FOR THE BACKUP REPLICATION)
Lsyncd is a synchronization daemon based primarily on rsync. It runs on the "master" server and mirrors any file
or directory changes to one or more "slave" servers within seconds; you can have as many slave servers as you
want. Lsyncd constantly watches a local directory and monitors filesystem changes using inotify / fsevents.
By default, lsyncd uses rsync to send the data to the slave machines, although other transports are possible.
It does not require you to build new filesystems or block devices, and it does not harm your server's I/O performance.
yum -y install lua lua-devel pkgconfig gcc asciidoc lsyncd rsync
@ NFS SERVER (10.75.40.30/192.168.0.30)
vim /etc/lsyncd.conf
settings = {
logfile = "/var/log/lsyncd.log",
statusFile = "/tmp/lsyncd.stat",
statusInterval = 1,
}
sync {
default.rsync,
source = "/u01/VM/nfs_shares",
target = "192.168.0.31:/u01/VM/nfs_shares",
rsync = {
compress = true,
acls = true,
verbose = true,
owner = true,
group = true,
perms = true,
rsh = "/usr/bin/ssh -l root -i /root/.ssh/id_rsa -o StrictHostKeyChecking=no"
}
}
service lsyncd start
chkconfig lsyncd on
mkdir -p /var/log/lsyncd
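Replication can be tested by creating a file on the NFS server and confirming that it appears on the backup keeper within a few seconds (a simple smoke test, using the target host from the lsyncd configuration above):
touch /u01/VM/nfs_shares/lsyncd_test_file
tail -n 20 /var/log/lsyncd.log
ssh root@192.168.0.31 ls -l /u01/VM/nfs_shares/lsyncd_test_file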
GENERATE THE SSH PUBLIC KEYS BETWEEN NFS & NFS BACKUP KEEPER SERVERS
#!/bin/sh
echo "Are the both of the server’s reachable conditions satisfied? (y/n)"
read sslkeygen
case $sslkeygen in
y)
echo "Please enter the IP address of the Source Linux Server node."
read ipaddr1
echo "Please enter the IP address of the Destination Linux Server."
read ipaddr2
echo ""
echo "Generating SSH key..."
ssh-keygen -t rsa
echo ""
echo "Copying SSH key to the Destination Linux Server..."
echo "Please enter the root password for the Remote Linux Server."
ssh root@$ipaddr2 mkdir -p .ssh
cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr2 'cat >> .ssh/authorized_keys'
ssh root@$ipaddr2 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
echo ""
echo "SSH Key Authentication successfully set up ... continuing Next Linux RSA Key installation
form Remote server to Source Server..."
echo "Generating SSH key on Destination Server..."
ssh root@$ipaddr2 ssh-keygen -t rsa
echo ""
echo "Copying SSH key to the Destination Linux Server..."
echo "Please enter the root password for the Remote Linux Server."
mkdir -p .ssh
ssh root@$ipaddr2 cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr1 'cat >> .ssh/authorized_keys'
chmod 700 .ssh; chmod 640 .ssh/authorized_keys
echo ""
;;
n)
echo "Root access must be enabled on the second machine...exiting!"
exit 1
;;
*)
echo "Unknown choice ... exiting!"
exit 2
esac
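After the script completes, password-less root SSH should work in both directions; a quick test from the NFS server (assuming the backup keeper is 10.75.40.32, as in the DNS zone above):
ssh root@10.75.40.32 hostname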
CONFIGURING NFS SERVER (10.75.40.30 / 192.168.0.30)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
mkdir -p /u01/VM/nfs_shares/shared_1
mkdir -p /u01/VM/nfs_shares/shared_2
mkdir -p /u01/VM/nfs_shares/shared_3
chown grid:dba /u01/VM/nfs_shares/shared_1
chown grid:dba /u01/VM/nfs_shares/shared_2
chown grid:dba /u01/VM/nfs_shares/shared_3
chmod +x /u01/VM/nfs_shares/shared_1
chmod +x /u01/VM/nfs_shares/shared_2
chmod +x /u01/VM/nfs_shares/shared_3
vim /etc/exports
EXPORT OPTIONS TUNED FOR WRITE PERFORMANCE (ASYNC):
/u01/VM/nfs_shares/shared_1 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_2 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_3 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
chkconfig nfs on
service nfs restart
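If /etc/exports is modified later, the exports can be re-read without restarting the NFS service, and the options in effect can be reviewed:
exportfs -ra
exportfs -v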
showmount -e 10.75.40.30
NFS MASTER RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
showmount -e 10.75.40.32
NFS SLAVE RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
showmount -e 10.75.40.30
NFS VIRTUAL IP RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
CONFIGURING NFS MOUNT POINTS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
vim /etc/fstab
10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_2 /u02 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_3 /u03 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
mount /u01
mount /u02
mount /u03
df -hT
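Before starting the Grid installation, it is worth confirming that the grid user can write to each NFS mount from both nodes (a simple check):
su - grid -c 'touch /u01/nfs_write_test && rm /u01/nfs_write_test && echo /u01 OK'
su - grid -c 'touch /u02/nfs_write_test && rm /u02/nfs_write_test && echo /u02 OK'
su - grid -c 'touch /u03/nfs_write_test && rm /u03/nfs_write_test && echo /u03 OK'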
INSTALLING ORACLE GRID ON BOTH NODES FROM 192.168.0.139
[Screenshots: Oracle Grid Infrastructure installer wizard steps]
INSTALLING ORACLE DATABASE ON BOTH NODES FROM 192.168.0.139
[Screenshots: Oracle Database installer wizard steps]
ADMINISTERING THE GRID INFRASTRUCTURE ON BOTH RAC NODES (as the grid user)
su - grid
crsctl status resource -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE kkcodb01 STABLE
ONLINE ONLINE kkcodb02 STABLE
ora.asm
OFFLINE OFFLINE kkcodb01 Instance Shutdown,ST
ABLE
OFFLINE OFFLINE kkcodb02 STABLE
ora.net1.network
ONLINE ONLINE kkcodb01 STABLE
ONLINE ONLINE kkcodb02 STABLE
ora.ons
ONLINE ONLINE kkcodb01 STABLE
ONLINE ONLINE kkcodb02 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.LISTENER_SCAN4.lsnr
1 ONLINE ONLINE kkcodb01 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE kkcodb01 169.254.225.48 10.75
.40.143,STABLE
ora.cvu
1 ONLINE ONLINE kkcodb01 STABLE
ora.kkcodb01.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.kkcodb02.vip
1 ONLINE ONLINE kkcodb02 STABLE
ora.mgmtdb
1 ONLINE ONLINE kkcodb01 Open,STABLE
ora.oc4j
1 ONLINE ONLINE kkcodb01 STABLE
ora.oradb.db
1 ONLINE ONLINE kkcodb01 Open,STABLE
2 ONLINE ONLINE kkcodb02 Open,STABLE
ora.scan1.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.scan2.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.scan3.vip
1 ONLINE ONLINE kkcodb01 STABLE
ora.scan4.vip
1 ONLINE ONLINE kkcodb01 STABLE
srvctl status instance -db oradb -node kkcodb01
srvctl status instance -db oradb -node kkcodb02
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02
srvctl status instance -d oradb -i oradb1
srvctl status instance -d oradb -i oradb2
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02
SUMMARY OF THE MOST IMPORTANT COMMANDS TO START / STOP / CHECK CLUSTER
RESOURCES
crsctl check crs
crsctl check cluster -n kkcodb01
crsctl check ctss
crsctl config crs (requires root)
cat /etc/oracle/scls_scr/rac1/root/ohasdstr
crsctl stat res -t
crsctl stat res ora.rac.db -p
crsctl stat res ora.rac.db -f
crsctl query css votedisk
olsnodes -n -i -s -t
oifcfg getif
ocrcheck
ocrcheck -local (requires root)
ocrconfig -showbackup
ocrconfig -add +TEST
cluvfy comp crs -n rac1
srvctl status database -d oradb
srvctl status instance -d oradb -i kkcodb01
srvctl status service -d oradb
srvctl status nodeapps
srvctl status vip -n kkcodb01
srvctl status listener -l LISTENER
srvctl status asm -n kkcodb01
srvctl status scan
srvctl status scan_listener
srvctl status server -n kkcodb01
srvctl status diskgroup -g DGRAC
srvctl config database -d oradb
srvctl config service -d oradb
srvctl config nodeapps
srvctl config vip -n kkcodb01
srvctl config asm -a
srvctl config listener -l LISTENER
srvctl config scan
srvctl config scan_listener
crsctl stop cluster
crsctl start cluster
crsctl stop crs
crsctl start crs
crsctl disable crs
crsctl enable crs
srvctl stop database -d oradb -o immediate
srvctl start database -d oradb
srvctl stop instance -d oradb -i kkcodb01 -o immediate
srvctl start instance -d oradb -i kkcodb01
srvctl stop service -d oradb -s OLTP -n kkcodb01
srvctl start service -d oradb -s OLTP
srvctl stop nodeapps -n kkcodb01
srvctl start nodeapps
srvctl stop vip -n rac1
srvctl start vip -n rac1
srvctl stop asm -n rac1 -o abort -f
srvctl start asm -n rac1
srvctl stop listener -l LISTENER
srvctl start listener -l LISTENER
srvctl stop scan -i 1
srvctl start scan -i 1
srvctl stop scan_listener -i 1
srvctl start scan_listener -i 1
srvctl stop diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl start diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl relocate service -d RAC -s OLTP -i kkcodb01 -t kkcodb02
srvctl relocate scan_listener -i 1 rac1
DELETING THE GRID INFRASTRUCTURE ON BOTH RAC NODES
Log in as the grid user:
Removing Grid:
$ORACLE_HOME/deinstall/deinstall
Removing a broken Grid installation:
On all cluster nodes except the last, run the following command as the "root" user.
perl /app/12.1.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
Cleanup steps for cluster node removal:
rm -rf /app/*
#rm -rf /u01/* ; rm -rf /u02/* ; rm -rf /u03/*
rm -rf /etc/oracle/*
rm -rf /etc/oraInst.loc
rm -rf /etc/oratab
rm -rf /var/tmp/.oracle/*
rm -rf /home/oracle/.ssh/
rm -rf /home/grid/.ssh/
shutdown -r now
CONNECTING WITH ORACLE SQL DEVELOPER
Data Cloud, More than a CDP by Matt RobisonAnna Loughnan Colquhoun
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)wesley chun
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsJoaquim Jorge
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 

Último (20)

Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
04-2024-HHUG-Sales-and-Marketing-Alignment.pptx
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
Artificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and MythsArtificial Intelligence: Facts and Myths
Artificial Intelligence: Facts and Myths
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 

Oracle cluster installation with grid and nfs

  • 4.
vim /etc/fstab
/root/newswapfile swap swap defaults 0 0

Verify:
swapon -s
free -k

EDIT "/ETC/SYSCONFIG/NETWORK" AS ROOT USER ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
On kkcodb01:
NETWORKING=yes
HOSTNAME=kkcodb01
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb01

On kkcodb02:
NETWORKING=yes
HOSTNAME=kkcodb02
# Recommended value for NOZEROCONF
NOZEROCONF=yes
hostname kkcodb02

UPDATE /ETC/HOSTS FILE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
Make sure the hosts file has the right entries (remove or comment out IPv6 lines) and that each IP maps to the correct hostname. Edit /etc/hosts as root:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#public
192.168.0.139 kkcodb01 kkcodb01.example.com
192.168.0.140 kkcodb02 kkcodb02.example.com
#vip
192.168.0.143 kkcodb01-vip kkcodb01-vip.example.com
192.168.0.144 kkcodb02-vip kkcodb02-vip.example.com
#scan vip
#192.168.0.145 kkcodb-scan kkcodb-scan.example.com
#192.168.0.146 kkcodb-scan kkcodb-scan.example.com
#192.168.0.147 kkcodb-scan kkcodb-scan.example.com
#192.168.0.148 kkcodb-scan kkcodb-scan.example.com
#priv
10.75.40.143 kkcodb01-priv1 kkcodb01-priv1.example.com
10.75.40.144 kkcodb02-priv1 kkcodb02-priv1.example.com
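Before moving on, it is worth confirming that every alias above resolves identically on both nodes. This is only a quick sanity check, not a step from the original slides, using the names defined in /etc/hosts:

for h in kkcodb01 kkcodb02 kkcodb01-vip kkcodb02-vip kkcodb01-priv1 kkcodb02-priv1; do
  getent hosts $h || echo "$h does not resolve"
done
ping -c 1 kkcodb02        # public network, run from kkcodb01
ping -c 1 kkcodb02-priv1  # private interconnect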
  • 5.
BOND / TEAM MULTIPLE NETWORK INTERFACES (NIC) INTO A SINGLE INTERFACE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
The Linux bonding driver provides a method for aggregating multiple network interfaces into a single logical "bonded" interface. The behaviour of the bonded interface depends on the mode; generally speaking, modes provide either hot-standby or load-balancing services. Additionally, link integrity monitoring may be performed.
Modify the eth0, eth1, eth2 ... ethX config files to enslave them to bond0 and bond1.

Create the bond0 configuration file:
vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.0.0
NETMASK=255.255.255.0
IPADDR=192.168.0.139
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"

vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no

vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
HWADDR=
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no

Create the bond1 configuration file:
  • 6.
vim /etc/sysconfig/network-scripts/ifcfg-bond1
DEVICE=bond1
BOOTPROTO=none
ONBOOT=yes
NETWORK=10.75.40.0
NETMASK=255.255.255.0
IPADDR=10.75.40.143
USERCTL=no
PEERDNS=no
BONDING_OPTS="mode=1 miimon=100"

vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no

vim /etc/sysconfig/network-scripts/ifcfg-eth3
DEVICE=eth3
HWADDR=
TYPE=Ethernet
MASTER=bond1
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
IPV6INIT=no
USERCTL=no

vim /etc/modprobe.conf
alias bond0 bonding
alias bond1 bonding

CREATE USER AND GROUPS FOR ORACLE DATABASE AND GRID ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle
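Two quick checks at this point, neither of which is a step from the original slides: after restarting the network service the state of each active-backup bond can be read from the bonding driver, and the new accounts can be compared against the IDs used above:

service network restart
cat /proc/net/bonding/bond0   # MII status and the currently active slave for the public bond
cat /proc/net/bonding/bond1   # same for the private interconnect bond
id grid                       # expect uid=1100, primary group dba(1200), secondary oinstall(1000)
id oracle                     # expect uid=1300, primary group dba(1200), secondary oinstall(1000)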
  • 7.
mkdir -p /app/oracle
mkdir -p /app/12.1.0/grid
chown grid:dba /app
chown grid:dba /app/oracle
chown grid:dba /app/12.1.0
chown grid:dba /app/12.1.0/grid
chmod -R 775 /app

mkdir -p /u01 ; mkdir -p /u02 ; mkdir -p /u03

(Giving read/write/execute permission to the grid user in the dba group)
chown grid:dba /u01
chown grid:dba /u02
chown grid:dba /u03
chmod +x /u01
chmod +x /u02
chmod +x /u03

or (giving read/write/execute permission to all users in the dba group, i.e. grid and oracle)
chgrp dba /u01
chgrp dba /u02
chgrp dba /u03
chmod g+swr /u01
chmod g+swr /u02
chmod g+swr /u03

SETTING UP ENVIRONMENT VARIABLES FOR OS ACCOUNTS: GRID AND ORACLE ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
@ kkcodb01 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
  • 8.
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022

@ kkcodb02 as the grid user
su - grid
vim /home/grid/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=RAC; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$GRID_HOME; export ORACLE_HOME
ORACLE_SID=RAC2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022
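A one-line check (not part of the original steps) confirms the grid profile is picked up on each node:

su - grid -c 'echo $ORACLE_HOME $ORACLE_SID'
# expected: /app/12.1.0/grid RAC1 on kkcodb01, and /app/12.1.0/grid RAC2 on kkcodb02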
  • 9.
@ kkcodb01 as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb01; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022

@ kkcodb02 as the oracle user
su - oracle
vim /home/oracle/.bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR
ORACLE_HOSTNAME=kkcodb02; export ORACLE_HOSTNAME
ORACLE_UNQNAME=oradb; export ORACLE_UNQNAME
  • 10.
ORACLE_BASE=/app/oracle; export ORACLE_BASE
GRID_HOME=/app/12.1.0/grid; export GRID_HOME
DB_HOME=$ORACLE_BASE/product/12.1.0/db_1; export DB_HOME
ORACLE_HOME=$DB_HOME; export ORACLE_HOME
ORACLE_SID=oradb2; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
BASE_PATH=/usr/sbin:$PATH; export BASE_PATH
PATH=$ORACLE_HOME/bin:$BASE_PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
umask 022

KERNEL PARAMETERS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
MEMTOTAL=$(free -b | sed -n '2p' | awk '{print $2}')
SHMMAX=$(expr $MEMTOTAL / 2)
SHMMNI=4096
PAGESIZE=$(getconf PAGE_SIZE)

cat >> /etc/sysctl.conf << EOF
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmmax = $SHMMAX
kernel.shmall = $(( SHMMAX / PAGESIZE * (SHMMNI / 16) ))
kernel.shmmni = $SHMMNI
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF

cat >> /etc/security/limits.conf <<EOF
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle hard memlock 5437300
EOF

cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF
  • 11.
cat >> /etc/profile <<'EOF'
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
EOF

cat >> /etc/csh.login <<'EOF'
if ( $USER == "oracle" || $USER == "grid" ) then
  limit maxproc 16384
  limit descriptors 65536
endif
EOF

(The heredoc delimiters are quoted so that $USER and $SHELL are written literally into the profile files rather than being expanded by the root shell running the cat command.)

Execute shutdown -r now on both nodes.

DOWNLOADING ORACLE DATABASE AND GRID INFRASTRUCTURE SOFTWARE
You would have to download Oracle Database 12c Release 1 Grid Infrastructure (12.1.0.2.0) for Linux x86-64:
Download - linuxamd64_12102_grid_1of2.zip
Download - linuxamd64_12102_grid_2of2.zip

Downloading and installing the Oracle Database software
You would have to download Oracle Database 12c Release 1 (12.1.0.2.0) for Linux x86-64:
Download - linuxamd64_12102_database_1of2.zip
Download - linuxamd64_12102_database_2of2.zip

Copy the zip files to the /tmp directory on the kkcodb01 server using WinSCP.
As the root user:
cd /tmp
chmod +x *.zip
for i in /tmp/linuxamd64_12102_grid_*.zip; do unzip $i -d /home/grid/stage; done
for i in /tmp/linuxamd64_12102_database_*.zip; do unzip $i -d /home/oracle/stage; done
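Two small follow-ups that are easy to miss here; both are assumptions about this layout rather than steps from the original slides. The new kernel parameters can be loaded without waiting for the reboot, the shell limits can be spot-checked, and the unzipped stage directories should end up owned by the installation owners:

sysctl -p                               # load the parameters just appended to /etc/sysctl.conf
su - grid -c "ulimit -u; ulimit -n"     # should report 16384 and 65536 once the /etc/profile block is in place
su - oracle -c "ulimit -u; ulimit -n"
chown -R grid:dba /home/grid/stage
chown -R oracle:dba /home/oracle/stage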
  • 12.
INSTALL BIND TO CONFIGURE A DNS SERVER ON 192.168.0.138, WHICH RESOLVES THE DOMAIN NAMES AND IP ADDRESSES.
yum -y install bind bind-utils

Configure BIND:
vim /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
acl "trusted" {
 192.168.0.0/24;
 10.75.40.0/24;
};
options {
 listen-on port 53 { 127.0.0.1; 192.168.0.0/24; 10.75.40.0/24; };
 #listen-on-v6 port 53 { ::1; };
 directory "/var/named";
 dump-file "/var/named/data/cache_dump.db";
 statistics-file "/var/named/data/named_stats.txt";
 memstatistics-file "/var/named/data/named_mem_stats.txt";
 allow-transfer { any; };
 allow-query { localhost; trusted; };
 recursion yes;
 dnssec-enable yes;
 dnssec-validation yes;
 /* Path to ISC DLV key */
 bindkeys-file "/etc/named.iscdlv.key";
 managed-keys-directory "/var/named/dynamic";
};
logging {
 channel default_debug {
  file "data/named.run";
  severity dynamic;
 };
};
zone "." IN {
 type hint;
 file "named.ca";
};
  • 13.
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
include "/etc/named/named.conf.local";

vim /etc/named/named.conf.local
zone "example.com" {
 type master;
 file "/etc/named/zones/db.example.com"; # zone file path
};
zone "0.192.in-addr.arpa" {
 type master;
 file "/etc/named/zones/db.192.0"; # 192.168.0.0/16
};

vim /etc/named/zones/db.example.com
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
 3 ; Serial
 604800 ; Refresh
 86400 ; Retry
 2419200 ; Expire
 604800 ) ; Negative Cache TTL
; name servers - NS records
 IN NS ns1.example.com.
; name servers - A records
ns1.example.com. IN A 192.168.0.139
; A records
kkcodb-scan IN A 192.168.0.145
kkcodb-scan IN A 192.168.0.146
kkcodb-scan IN A 192.168.0.147
kkcodb-scan IN A 192.168.0.148
;
kkcodb01-priv1 IN A 10.75.40.143
kkcodb02-priv1 IN A 10.75.40.144
;
kkcodb01 IN A 192.168.0.139
kkcodb02 IN A 192.168.0.140
;
nfs IN A 192.168.0.30
nfs-active IN A 10.75.40.31
nfs-pasive IN A 10.75.40.32
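Before restarting named it is worth validating the configuration and the new forward zone, and once the service is up the SCAN round-robin can be spot-checked from any node. These checks are not in the original slides; the DNS server address is taken from the resolv.conf used later:

named-checkconf /etc/named.conf
named-checkzone example.com /etc/named/zones/db.example.com
dig +short kkcodb-scan.example.com @192.168.0.138   # should return the four SCAN addresses in rotating order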
  • 14.
vim /etc/named/zones/db.192.0
$TTL 604800
@ IN SOA ns1.example.com. root.example.com. (
 3 ; Serial
 604800 ; Refresh
 86400 ; Retry
 2419200 ; Expire
 604800 ) ; Negative Cache TTL
; name servers - NS records
 IN NS ns1.example.com.
; PTR Records
139.0 IN PTR ns1.example.com. ; 192.168.0.139
;
145.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.145
146.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.146
147.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.147
148.0 IN PTR kkcodb-scan.example.com. ; 192.168.0.148
;
143.40 IN PTR kkcodb01-priv1.example.com. ; 10.75.40.143
144.40 IN PTR kkcodb02-priv2.example.com. ; 10.75.40.144
;
139.0 IN PTR kkcodb01.example.com. ; 192.168.0.139
140.0 IN PTR kkcodb02.example.com. ; 192.168.0.140
;
30.0 IN PTR nfs.example.com. ; 192.168.0.30
31.40 IN PTR nfs-active.example.com. ; 10.75.40.31
32.40 IN PTR nfs-pasive.example.com. ; 10.75.40.32

chkconfig named on
service named restart
named-checkzone 0.192.in-addr.arpa /etc/named/zones/db.192.0

ALL OF THE SERVERS SHOULD BE CONFIGURED AS DNS CLIENTS AS FOLLOWS (INCLUDING THE BIND SERVER ITSELF):
vim /etc/sysconfig/networking/profiles/default/resolv.conf
nameserver 192.168.0.138
search example.com

vim /etc/resolv.conf
nameserver 192.168.0.138
search example.com

service network restart
  • 15.
chkconfig NetworkManager off
service network restart
cat /etc/resolv.conf

IF DEVICE eth0, eth1 ... ethX DOES NOT SEEM TO BE PRESENT:
I was able to fix the problem by deleting the /etc/udev/rules.d/70-persistent-net.rules file and restarting the virtual machine, which generated a new file and got everything set up correctly. Remove the .ssh directory from the individual users and restart the servers.

LOGICAL VOLUME MANAGEMENT - LVM ON THE NFS & NFS BACKUP KEEPER SERVERS
LVM is a logical volume manager for the Linux kernel that manages disk drives and similar mass-storage devices. Heinz Mauelshagen wrote the original code in 1998, taking its primary design guidelines from HP-UX's volume manager. The installers for the CrunchBang, CentOS, Debian, Fedora, Gentoo, Mandriva, MontaVista Linux, openSUSE, Pardus, Red Hat Enterprise Linux, Slackware, SLED, SLES, Linux Mint, Kali Linux, and Ubuntu distributions are LVM-aware and can install a bootable system with a root filesystem on a logical volume.

LVM IS COMMONLY USED FOR THE FOLLOWING PURPOSES:
1. Managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping.
2. On small systems (like a desktop at home), instead of having to estimate at installation time how big a partition might need to be, LVM allows file systems to be easily resized later as needed.
3. Performing consistent backups by taking snapshots of the logical volumes.
4. Creating single logical volumes out of multiple physical volumes or entire hard disks (somewhat similar to RAID 0, but more similar to JBOD), allowing for dynamic volume resizing.
5. The Ganeti solution stack relies on the Linux Logical Volume Manager.
6. LVM can be considered a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease of use for managing hard drive replacement, re-partitioning, and backup.

THE LVM CAN:
1. Resize volume groups online by absorbing new physical volumes (PVs) or ejecting existing ones.
2. Resize logical volumes (LVs) online by concatenating extents onto them or truncating extents from them.
3. Create read-only snapshots of logical volumes (LVM1).
4. Create read-write snapshots of logical volumes (LVM2).
5. Create RAID logical volumes (in newer LVM implementations): RAID 1, RAID 5, RAID 6, etc.
6. Stripe whole or parts of logical volumes across multiple PVs, in a fashion similar to RAID 0.
7. Configure a RAID 1 backend device (a PV) as write-mostly, so that reads are avoided on such devices where possible.
8. Allocate thin-provisioned logical volumes from a pool.
9. Move online logical volumes between PVs, and split or merge volume groups in situ (as long as no logical volumes span the split). This can be useful when migrating whole logical volumes to or from offline storage.
10. Create hybrid volumes by using the dm-cache target, which allows one or more fast storage devices, such as flash-based solid-state drives (SSDs), to act as a cache for one or more slower hard disk drives (HDDs).
  • 16.
CREATE A PHYSICAL VOLUME
Input Command:
pvcreate -ff /dev/sdb
Output:
Physical volume "/dev/sdb" successfully created

DISPLAY THE STATUS OF PHYSICAL VOLUMES
Input Command:
pvdisplay /dev/sdb
Output:
"/dev/sdb" is a new physical volume of "150.00 GiB"
--- NEW Physical volume ---
PV Name /dev/sdb
VG Name
PV Size 150.00 GiB

CREATE A VOLUME GROUP
Input Command:
vgcreate volg1 /dev/sdb
Output:
Volume group "volg1" successfully created

DISPLAY VOLUME GROUPS
Input Command:
vgdisplay
Output:
--- Volume group ---
VG Name volg1
System ID
Format lvm2
  • 17.
VG Access read/write
VG Status resizable
VG Size 150.00 GiB

CREATE A LOGICAL VOLUME
Input Command:
lvcreate -L 149G -n lv_data volg1
NOTE: this creates a logical volume 'lv_data' of 149G in the volume group 'volg1'.
Output:
Logical volume "lv_data" created

DISPLAY THE STATUS OF LOGICAL VOLUMES
Input Command:
lvdisplay
Output:
--- Logical volume ---
LV Path /dev/volg1/lv_data
LV Name lv_data
VG Name volg1
LV Write Access read/write
LV Status available

FORMATTING THE LOGICAL VOLUME BEFORE MOUNTING IT
Input Command:
mkfs.ext4 /dev/volg1/lv_data
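One of the practical benefits listed earlier is online resizing. As a sketch only (it assumes a new disk /dev/sdc has been added and that the filesystem is the ext4 one created above), the data volume could later be grown without unmounting it:

pvcreate /dev/sdc
vgextend volg1 /dev/sdc
lvextend -L +50G /dev/volg1/lv_data
resize2fs /dev/volg1/lv_data   # grows the ext4 filesystem online to fill the enlarged LV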
  • 18.
MOUNTING THE LOGICAL VOLUME ON A SPECIFIC DIRECTORY
Input Command:
mkdir -p /u01/VM/nfs_shares
mount /dev/volg1/lv_data /u01/VM/nfs_shares

vim /etc/fstab
/dev/volg1/lv_data /u01/VM/nfs_shares ext4 defaults 0 0

CONFIGURING AN LSYNCD BACKUP SERVER AS A BACKUP KEEPER FOR THE NFS SERVER (CONFIGURE A SEPARATE NETWORK ADDRESS FOR THE BACKUP REPLICATION)
Lsyncd is a synchronization daemon based primarily on rsync. It runs on the "master" server and can sync / mirror any file or directory change within seconds to your "slave" servers; you can have as many slave servers as you want. Lsyncd constantly watches a local directory and monitors file system changes using inotify / fsevents. By default, lsyncd uses rsync to send the data to the slave machines, although other transports are possible. It does not require you to build new filesystems or block devices, and it does not harm your server's I/O performance.

yum -y install lua lua-devel pkgconfig gcc asciidoc lsyncd rsync

@ NFS SERVER (10.75.40.30 / 192.168.0.30)
vim /etc/lsyncd.conf
settings={
 logfile="/var/log/lsyncd.log",
 statusFile="/tmp/lsyncd.stat",
 statusInterval=1,
}
sync{
 default.rsync,
 source="/u01/VM/nfs_shares",
 target="192.168.0.31:/u01/VM/nfs_shares",
 rsync={rsh="/usr/bin/ssh -l root -i /root/.ssh/id_rsa",}
}
rsync = {
 compress = true,
 acls = true,
 verbose = true,
  • 19.
 owner = true,
 group = true,
 perms = true,
 rsh = "/usr/bin/ssh -p 22 -o StrictHostKeyChecking=no"
}

service lsyncd start
chkconfig lsyncd on
mkdir -p /var/log/lsyncd

GENERATE THE SSH PUBLIC KEYS BETWEEN THE NFS & NFS BACKUP KEEPER SERVERS
#!/bin/sh
echo "Are both of the servers reachable? (y/n)"
read sslkeygen
case $sslkeygen in
y)
 echo "Please enter the IP address of the Source Linux Server node."
 read ipaddr1
 echo "Please enter the IP address of the Destination Linux Server."
 read ipaddr2
 echo ""
 echo "Generating SSH key..."
 ssh-keygen -t rsa
 echo ""
 echo "Copying SSH key to the Destination Linux Server..."
 echo "Please enter the root password for the Remote Linux Server."
 ssh root@$ipaddr2 mkdir -p .ssh
 cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr2 'cat >> .ssh/authorized_keys'
 ssh root@$ipaddr2 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
 echo ""
 echo "SSH key authentication successfully set up ... continuing with the RSA key installation from the remote server back to the source server..."
 echo "Generating SSH key on the Destination Server..."
 ssh root@$ipaddr2 ssh-keygen -t rsa
 echo ""
 echo "Copying the SSH key back to the Source Linux Server..."
 echo "Please enter the root password for the Remote Linux Server."
 mkdir -p .ssh
 ssh root@$ipaddr2 cat /root/.ssh/id_rsa.pub | ssh root@$ipaddr1 'cat >> .ssh/authorized_keys'
 chmod 700 .ssh; chmod 640 .ssh/authorized_keys
 echo ""
 ;;
n)
 echo "Root access must be enabled on the second machine ... exiting!"
 exit 1
 ;;
*)
 echo "Unknown choice ... exiting!"
 exit 2
esac
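With the keys exchanged and lsyncd running, replication can be smoke-tested from the NFS master; the target path and slave address follow the lsyncd.conf above (this test is not part of the original slides):

touch /u01/VM/nfs_shares/replication_test
tail -n 20 /var/log/lsyncd.log                    # the triggered rsync should show up here
ssh root@192.168.0.31 ls -l /u01/VM/nfs_shares/   # the file should appear on the slave within seconds
rm /u01/VM/nfs_shares/replication_test            # the deletion is propagated the same way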
  • 20.
CONFIGURING THE NFS SERVER (10.75.40.30 / 192.168.0.30)
groupadd -g 1000 oinstall
groupadd -g 1200 dba
useradd -u 1100 -g dba -G oinstall grid
useradd -u 1300 -g dba -G oinstall oracle
passwd grid
passwd oracle

mkdir -p /u01/VM/nfs_shares/shared_1
mkdir -p /u01/VM/nfs_shares/shared_2
mkdir -p /u01/VM/nfs_shares/shared_3
chown grid:dba /u01/VM/nfs_shares/shared_1
chown grid:dba /u01/VM/nfs_shares/shared_2
chown grid:dba /u01/VM/nfs_shares/shared_3
chmod +x /u01/VM/nfs_shares/shared_1
chmod +x /u01/VM/nfs_shares/shared_2
chmod +x /u01/VM/nfs_shares/shared_3

vim /etc/exports
OPTIMISED BY 50% (ASYNC):
/u01/VM/nfs_shares/shared_1 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_2 *(rw,async,no_wdelay,insecure_locks,no_root_squash)
/u01/VM/nfs_shares/shared_3 *(rw,async,no_wdelay,insecure_locks,no_root_squash)

chkconfig nfs on
service nfs restart

showmount -e 10.75.40.30
NFS MASTER RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *

showmount -e 10.75.40.32
NFS SLAVE RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *

showmount -e 10.75.40.30
NFS VIRTUAL IP RESULT:
/u01/VM/nfs_shares/shared_3 *
/u01/VM/nfs_shares/shared_2 *
/u01/VM/nfs_shares/shared_1 *
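If an export is added or changed later, it can be re-read without restarting the whole NFS service; exportfs ships with nfs-utils (this is an aside, not a step from the original slides):

exportfs -ra   # re-export everything listed in /etc/exports
exportfs -v    # list the active exports together with their effective options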
  • 21.
CONFIGURING NFS MOUNT POINTS ON BOTH NODES/RACS - (192.168.0.139 & 192.168.0.140)
vim /etc/fstab
10.75.40.30:/u01/VM/nfs_shares/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_2 /u02 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
10.75.40.30:/u01/VM/nfs_shares/shared_3 /u03 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

mount /u01
mount /u02
mount /u03
df -hT

INSTALLING ORACLE GRID ON BOTH NODES FROM 192.168.0.139
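Before launching the Grid installer from kkcodb01, a quick pre-flight check (not in the original slides) confirms on both nodes that the NFS mounts came up with the intended options and are writable by the installation owner:

nfsstat -m                                                  # shows the mount options actually negotiated
su - grid -c "touch /u01/write_test && rm /u01/write_test"  # the grid user can write to the shared area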
  • 22-31. (Oracle Grid Infrastructure installer screenshots; no text on these slides.)
  • 32.
INSTALLING ORACLE DATABASE ON BOTH NODES FROM 192.168.0.139
  • 33-40. (Oracle Database installer screenshots; no text on these slides.)
  • 41.
ADMINISTERING THE GRID INFRASTRUCTURE ON BOTH RAC NODES (as the grid user)
su - grid
crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       kkcodb01                 STABLE
               ONLINE  ONLINE       kkcodb02                 STABLE
ora.asm
               OFFLINE OFFLINE      kkcodb01                 Instance Shutdown,STABLE
               OFFLINE OFFLINE      kkcodb02                 STABLE
ora.net1.network
               ONLINE  ONLINE       kkcodb01                 STABLE
               ONLINE  ONLINE       kkcodb02                 STABLE
ora.ons
               ONLINE  ONLINE       kkcodb01                 STABLE
               ONLINE  ONLINE       kkcodb02                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.LISTENER_SCAN4.lsnr
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       kkcodb01                 169.254.225.48 10.75.40.143,STABLE
ora.cvu
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.kkcodb01.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.kkcodb02.vip
      1        ONLINE  ONLINE       kkcodb02                 STABLE
ora.mgmtdb
      1        ONLINE  ONLINE       kkcodb01                 Open,STABLE
ora.oc4j
  • 42.
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.oradb.db
      1        ONLINE  ONLINE       kkcodb01                 Open,STABLE
      2        ONLINE  ONLINE       kkcodb02                 Open,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE
ora.scan4.vip
      1        ONLINE  ONLINE       kkcodb01                 STABLE

srvctl status instance -db oradb -node kkcodb01
srvctl status instance -db oradb -node kkcodb02
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02

srvctl status instance -d oradb -i oradb1
srvctl status instance -d oradb -i oradb2
Instance oradb1 is running on node kkcodb01
Instance oradb2 is running on node kkcodb02

SUMMARY OF THE MOST IMPORTANT COMMANDS TO START / STOP / CHECK CLUSTER RESOURCES
crsctl check crs
crsctl check cluster -n kkcodb01
crsctl check ctss
crsctl config crs (requires root)
cat /etc/oracle/scls_scr/rac1/root/ohasdstr
crsctl stat res -t
crsctl stat res ora.rac.db -p
crsctl stat res ora.rac.db -f
crsctl query css votedisk
olsnodes -n -i -s -t
oifcfg getif
ocrcheck
ocrcheck -local (requires root)
ocrconfig -showbackup
ocrconfig -add +TEST
cluvfy comp crs -n rac1
srvctl status database -d oradb
srvctl status instance -d oradb -i kkcodb01
srvctl status service -d oradb
srvctl status nodeapps
  • 43.
srvctl status vip -n kkcodb01
srvctl status listener -l LISTENER
srvctl status asm -n kkcodb01
srvctl status scan
srvctl status scan_listener
srvctl status server -n kkcodb01
srvctl status diskgroup -g DGRAC
srvctl config database -d oradb
srvctl config service -d oradb
srvctl config nodeapps
srvctl config vip -n kkcodb01
srvctl config asm -a
srvctl config listener -l LISTENER
srvctl config scan
srvctl config scan_listener

crsctl stop cluster
crsctl start cluster
crsctl stop crs
crsctl start crs
crsctl disable crs
crsctl enable crs

srvctl stop database -d oradb -o immediate
srvctl start database -d oradb
srvctl stop instance -d oradb -i kkcodb01 -o immediate
srvctl start instance -d oradb -i kkcodb01
srvctl stop service -d oradb -s OLTP -n kkcodb01
srvctl start service -d oradb -s OLTP
srvctl stop nodeapps -n kkcodb01
srvctl start nodeapps
srvctl stop vip -n rac1
srvctl start vip -n rac1
srvctl stop asm -n rac1 -o abort -f
srvctl start asm -n rac1
srvctl stop listener -l LISTENER
srvctl start listener -l LISTENER
srvctl stop scan -i 1
srvctl start scan -i 1
srvctl stop scan_listener -i 1
srvctl start scan_listener -i 1
srvctl stop diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl start diskgroup -g TEST -n kkcodb01,kkcodb02
srvctl relocate service -d RAC -s OLTP -i kkcodb01 -t kkcodb02
srvctl relocate scan_listener -i 1 rac1
  • 44.
DELETING THE GRID INFRASTRUCTURE ON BOTH RAC NODES
Log in as the grid user.
Removing GRID:
$ORACLE_HOME/deinstall/deinstall

Removing a broken Grid installation:
On all cluster nodes except the last, run the following command as the "root" user (shown first with a literal 11.2.0-style path, then using the GRID_HOME variable from this setup):
perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force
perl $GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force

Cleanup steps for cluster node removal:
rm -rf /app/*
#rm -rf /u01/* ; rm -rf /u02/* ; rm -rf /u03/*
rm -rf /etc/oracle/*
rm -rf /etc/oraInst.loc
rm -rf /etc/oratab
rm -rf /var/tmp/.oracle/*
rm -rf /home/oracle/.ssh/
rm -rf /home/grid/.ssh/
shutdown -r now

CONNECTING WITH ORACLE SQL DEVELOPER
  • 45. (Oracle SQL Developer connection screenshot; no text on this slide.)
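The screenshot on this page shows the connection being defined in SQL Developer. As a sketch of the equivalent settings (the listener port is an assumption; the slides do not state it, so the default 1521 is used):

Connection Type: Basic
Hostname: kkcodb-scan.example.com
Port: 1521
Service name: oradb

The same connection can be tested from any client with an EZConnect string:
sqlplus system@//kkcodb-scan.example.com:1521/oradb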