2. About me
Sangam 15 Anju Garg 2
• Oracle Ace Associate
• More than 13 years of experience in IT Industry
• Independent Corporate Trainer (Oracle DBA), Author, Speaker
• Oracle blog : http://oracleinaction.com/
• Email : anjugarg66@gmail.com
• Oracle Certified Expert
3. Agenda
• ACFS Configurations
• Pre-12c ACFS Cluster Configuration With NFS: Limitations
• ACFS Cluster Configuration With Highly Available NFS (HANFS)
• HANFS Component Resources
• Illustrations
I. HANFS Configuration
II. HAVIP With Multiple associated Export FS resources
III. Execution of HAVIP on a node with the largest number of file systems available
IV. Load balancing of HAVIPs
V. Migration of HAVIPs and Exports across the Cluster
VI. Controlling the location of Exports in the Cluster
• Guidelines for HANFS configuration
• Potential use cases
• Conclusion
• References
• Q & A
4. ACFS Configurations
• Oracle ACFS supports the following configurations:
– Oracle Restart (i.e. single instance non-clustered) configuration
– Oracle Grid Infrastructure clustered configurations
• Data stored on an ACFS file system can also be accessed by NAS file access
protocols such as NFS (Network File System) or Microsoft's CIFS (Common
Internet File System).
5. Some examples of supported ACFS configurations
6. Non-Cluster Configuration
Files stored on ACFS (non-shared storage) are available to a single server.
7. Non-Cluster Configuration With Network File Servers
Files stored on ACFS (non-shared storage) are exported to clients over the
network using NFS (Network File System) or Microsoft's CIFS (Common Internet
File System).
8. ACFS Clustered Configuration
Files stored on ACFS located on shared storage can be directly accessed from
any of the nodes in the cluster.
9. ACFS Clustered Configuration With NFS/CIFS Network File Servers
Files stored on ACFS (i.e. shared storage) can be accessed from any of the
nodes in the cluster, as well as by server(s) outside the cluster, over NFS
(Network File System) or Microsoft's CIFS (Common Internet File System).
10. Pre-12c ACFS Cluster Configuration With NFS: Limitations
NFS-exported paths were not highly available: if the node hosting the export
crashed, the NFS service was interrupted and the NFS client could no longer
access the ACFS file system.
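In such a pre-12c setup, the export was typically configured by hand on a single node; a minimal sketch (the path, client pattern, and options here are illustrative assumptions, not taken from the slides):

```shell
# Hypothetical manual NFS export of an ACFS mount on ONE node (pre-12c style).
# /etc/exports entry (illustrative path and client pattern):
#   /mnt/acfsmounts/acfs1  *.example.com(rw)
# Publish everything listed in /etc/exports:
exportfs -a
# If this node crashes, nothing re-creates the export on another node --
# clients lose access until the node (or an administrator) restores it.
```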
11. ACFS Cluster Configuration With Highly Available NFS
(HANFS)
• With Oracle Database 12c R1, Cloud FS includes Highly Available NFS
(HANFS) over ACFS which enables highly available NFS servers to be
configured using Oracle ACFS clusters.
• The NFS exports are exposed through Highly Available VIPs (HAVIPs), and
this allows Oracle’s Clusterware agents to ensure that HAVIPs and NFS
exports are always available.
12. ACFS Cluster Configuration With HANFS
If the node hosting the export(s) fails, the corresponding HAVIP and hence its
corresponding NFS export(s) will automatically fail over to one of the surviving
nodes so that the NFS client continues to receive uninterrupted service of NFS
exported paths.
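How smooth this failover feels also depends on the client's mount options. As an illustrative sketch (these are standard Linux NFS client options, added here as an assumption rather than taken from the slides), a hard mount makes the client retry transparently until the HAVIP comes back up on a surviving node:

```shell
# Hypothetical client mount of a HANFS export (address/path as in the
# illustrations later in this deck; the options are generic NFS-client tuning).
# 'hard'    => retry indefinitely instead of returning I/O errors mid-failover
# 'vers=3'  => matches the NFS v3 support of ACFS HANFS 12.1
# 'timeo'/'retrans' => retry cadence while the HAVIP relocates
mount -t nfs -o hard,vers=3,timeo=600,retrans=2 \
      192.9.201.184:/mnt/acfsmounts/acfs1 /mnt/hanfs1
```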
13. HANFS Over ACFS: Features
• HANFS requires NFS in order to run and relies on the base operating
system to provide all NFS-related functionality (such as the NFS server and
NFS client utilities).
• NFS needs to be running on each node that can host the HANFS services.
• Oracle ACFS HANFS 12.1 works in conjunction with NFS v2 and v3 over
IPv4.
• While base NFS supports file locking, HANFS does not support NFS file
locking.
• HANFS is not supported in Oracle Restart configurations.
• HANFS is not supported with non-Oracle ACFS file systems.
• In the case of a Flex Cluster,
– Only hub nodes can host the HANFS services because hub nodes have
direct access to storage.
– Since ADVM/ACFS utilize an ADVM proxy instance to connect to Flex
ASM, an ADVM proxy instance must be running on each hub node that
can host HANFS services.
14. HANFS
Supported Platforms and Operating Systems
Platform / Operating System: Versions Supported
• AIX: AIX v6.1 or later
• Solaris: Solaris 11 GA or later, X64 and Sparc64
• Linux:
– Red Hat Enterprise Linux v5.3 and later or v6.0 and later (requires nfs-utils-1.0.9-60 or later)
– Oracle Enterprise Linux v5.3 and later or v6.0 and later, with the Unbreakable Enterprise Kernel or the Red Hat Compatible Kernel (requires nfs-utils-1.0.9-60 or later)
– SUSE Enterprise Server v11 or later (requires nfs-kernel-server-1.2.1-2.24.1 or later)
15. HANFS Component Resources
• In addition to ACFS/ADVM and ASM, HANFS also relies on the following
Oracle Database 12c R1 Clusterware resources:
– File System - An ACFS file system is an Oracle Clusterware resource
that you want to expose through HANFS.
– Highly Available VIP (HAVIP) - A special class of the standard Oracle
node VIP which manages a unique IP address in the cluster on a single
node at any time. It is relocated to an active node in the cluster as
necessary, with the objective of providing uninterrupted service of NFS
exported paths to its client(s).
– Export File System (Export FS) - A cluster resource which publishes the
designated ACFS file system to client(s) using HANFS. An Export FS
resource is associated with a HAVIP; together they provide
uninterrupted service of NFS exported paths to the cluster’s client(s).
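In outline, each of these resource types is created with a srvctl command; the exact invocations below are the ones used in the illustrations that follow:

```shell
# File System resource: register an ACFS file system on an ADVM volume device.
srvctl add filesystem -m /mnt/acfsmounts/acfs1 -d /dev/asm/vol1-106
# HAVIP resource: a cluster-managed IP address through which exports are served.
srvctl add havip -address 192.9.201.184 -id havip1
# Export FS resource: publish the file system over NFS via that HAVIP.
srvctl add exportfs -id havip1 -path /mnt/acfsmounts/acfs1 -name export1 \
       -options rw -clients *.example.com
```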
17. Overview
• To illustrate how to configure and leverage HANFS, we will utilize an ASM
Flex Cluster that is configured with two hub nodes (host01 and host02).
• After checking the prerequisites, we will create an ACFS file system
resource, an Export FS resource, and a HAVIP resource.
• Finally, we will verify that the HANFS Exported File System can be mounted
by the client.
18. Check Prerequisites
• Verify that all kernel modules needed for ACFS and ADVM are loaded on
all nodes
[root@host01 ~]# lsmod |grep oracle
oracleacfs 2837904 1
oracleadvm 342512 1
oracleoks 409560 2 oracleacfs,oracleadvm
oracleasm 84136 1
[root@host02 ~]# lsmod |grep oracle
oracleacfs 2837904 1
oracleadvm 342512 1
oracleoks 409560 2 oracleacfs,oracleadvm
oracleasm 84136 1
• Verify that an ASM proxy instance is running on all nodes.
[root@host02 ~]# crsctl stat res ora.proxy_advm -t
----------------------------------------------------------------
Name Target State Server State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.proxy_advm
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
19. Check Prerequisites
• Verify that the NFS service is running on both nodes
[root@host01 ~]# service nfs status
rpc.mountd (pid 4921) is running...
nfsd (pid 4910 4909 4908 4907 4906 4905 4904 4903) is running...
rpc.rquotad (pid 4882) is running...
[root@host02 ~]# service nfs status
rpc.mountd (pid 4985) is running...
nfsd (pid 4982 4981 4980 4979 4978 4977 4976 4975) is running...
rpc.rquotad (pid 4941) is running...
23. Configure / Start Cloud File System Resource
For ACFS File System
• Create a Cloud File System resource using the volume device VOL1 along
with the mount point. The ACFS file system remains unmounted until the file
system resource is actually started.
• Start the Cloud File System resource so that the file system is successfully
mounted on both nodes.
[root@host01 ~]# srvctl add filesystem -m /mnt/acfsmounts/acfs1 -d /dev/asm/vol1-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is not mounted
[root@host01 ~]# mount | grep vol1
[root@host02 ~]# mount | grep vol1
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol1-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
[root@host01 ~]# mount | grep vol1
/dev/asm/vol1-106 on /mnt/acfsmounts/acfs1 type acfs (rw)
[root@host02 ~]# mount | grep vol1
/dev/asm/vol1-106 on /mnt/acfsmounts/acfs1 type acfs (rw)
24. Configure / Start Cloud File System Resource
For ACFS File System
25. Verification of Cloud File System Resource
• To confirm that the new Cloud File System is accessible from each node,
create a small text file inside it from host01 and verify that the file can be
accessed from host02.
• At this point we have created and tested a new Cloud File System Resource.
• Now we will create HAVIP and Export FS resources and make this file system
available to a client server using HANFS.
[root@host01 ~]# echo "Test File on ACFS" > /mnt/acfsmounts/acfs1/testfile.txt
[root@host02 asm]# cat /mnt/acfsmounts/acfs1/testfile.txt
Test File on ACFS
26. Configure a HAVIP Resource
• Create a HAVIP resource called havip1 on a non-pingable, non-DHCP IP
address (e.g. 192.9.201.184).
• If we try to start the newly created HAVIP resource havip1, it fails because a
HAVIP resource has a hard start dependency on resource type
ora.havip1.export.type and hence requires at least one Export FS configured
and associated with it.
[root@host01 ~]# srvctl add havip -address 192.9.201.184 -id havip1
[root@host01 ~]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is not running
[root@host01 ~]# srvctl start havip -id havip1
PRCR-1079 : Failed to start resource ora.havip1.havip
CRS-2805: Unable to start 'ora.havip1.havip' because it has a 'hard' dependency on
resource type 'ora.havip1.export.type' and no resource of that type can satisfy the
dependency
[root@host01 ~]# crsctl stat res ora.havip1.havip -f | grep DEPENDENCIES
START_DEPENDENCIES=hard(ora.net1.network,uniform:type:ora.havip1.export.type) ...
28. Configure and Start an Export File System
Resource
• Create an Export FS cluster resource called export1 associated with HAVIP
havip1. This Export FS resource publishes the file system having specified
mount point using HANFS.
• Start the Export FS resource export1 - it starts on host02, i.e. host02 will
export the file system to clients.
[root@host01 ~]# srvctl add exportfs -id havip1 -path /mnt/acfsmounts/acfs1 -name
export1 -options rw -clients *.example.com
[root@host01 ~]# srvctl status exportfs -name export1
export file system export1 is enabled
export file system export1 is not exported
[root@host01 ~]# srvctl start exportfs -name export1
[root@host01 ~]# srvctl status exportfs -name export1
export file system export1 is enabled
export file system export1 is exported on node host02
29. Verify Status of HAVIP Resource
• Confirm that the HAVIP resource havip1 is automatically started on the same
host (host02) as Export FS resource export1, because the havip1 resource has a
pull-up start dependency on resource type ora.havip1.export.type.
[root@host01 ~]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
[root@host02 asm]# ifconfig |grep 184
inet addr:192.9.201.184 Bcast:192.9.201.255 Mask:255.255.255.0
[root@host01 ~]# crsctl stat res ora.havip1.havip -f | grep START_DEPENDENCIES
START_DEPENDENCIES=hard(ora.net1.network,uniform:type:ora.havip1.export.type)
weak(global:ora.gns) attraction(ora.data.vol1.acfs)
dispersion:active(type:ora.havip.type) pullup(ora.net1.network)
pullup:always(type:ora.havip1.export.type)
31. Mount the HANFS Exported File System
• To verify that the HANFS Exported File System can be mounted by the client,
we will create a mount point on a HANFS client server named server1 and
attempt to mount the HANFS exported file system.
• To confirm the success of the HANFS mount, we will simply verify the
contents of the text file created earlier.
[root@server1 ~]# mkdir -p /mnt/hanfs1
[root@server1 ~]# mount -t nfs 192.9.201.184:/mnt/acfsmounts/acfs1 /mnt/hanfs1
[root@server1 ~]# mount |grep hanfs1
192.9.201.184:/mnt/acfsmounts/acfs1 on /mnt/hanfs1 type nfs (rw,addr=192.9.201.184)
[root@server1 ~]# cat /mnt/hanfs1/testfile.txt
Test File on ACFS
33. Summary
• In an ASM Flex Cluster that is configured with two hub nodes (host01 and
host02), we
– Checked the prerequisites
– Created an ACFS file system resource, an Export FS resource, and a
HAVIP resource.
– Verified that the HANFS Exported File System can be mounted by the
client.
35. Overview
• A given set of HAVIP and associated Export FS resources is managed as a
group by Oracle Clusterware.
• A HAVIP and multiple Export FS resources associated with it will always
execute on the same node.
• To illustrate this functionality, we will
– Create and start another Export FS resource associated with the HAVIP
havip1 – Newly created export will execute on the same node as havip1
– Relocate the HAVIP havip1 : Clusterware will relocate both the
associated exports to the same node as havip1
– Stop the File System Resource corresponding to one Export FS resource
on one node: Associated export along with havip1 and the second
export will be migrated to the other node by the clusterware
• The HAVIP havip1 and the two Export FS resources associated with it will
always execute on the same node.
36. Find current status of HANFS resources
• Find out the current status of existing HANFS resources
– The ACFS file system resource on volume device VOL1 is mounted on both
nodes.
– Export FS resource export1 and associated HAVIP resource havip1 are
currently executing on node host02.
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
[root@host01 ~]# srvctl status exportfs -name export1
export file system export1 is enabled
export file system export1 is exported on node host02
[root@host01 ~]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
38. Create and start another Cloud File System
resource
• Create and start another Cloud File System resource which we want to
publish using HAVIP havip1. This File System resource has the mount point
/mnt/acfsmounts/acfs2 and uses the volume device VOL2.
[root@host01 ~]# srvctl add filesystem -m /mnt/acfsmounts/acfs2 -d /dev/asm/vol2-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol2-106
ACFS file system /mnt/acfsmounts/acfs2 is not mounted
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol2-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol2-106
ACFS file system /mnt/acfsmounts/acfs2 is mounted on nodes host01,host02
39. Create a second Export FS resource export2
associated with HAVIP havip1
• Create an Export FS cluster resource called export2 which will publish the
newly created file system using HANFS. This Export FS resource has also
been associated with HAVIP havip1.
• Now, there are two Export FS resources - export1 and export2 associated
with HAVIP havip1.
• While resources export1 and havip1 are currently executing on host02,
resource export2 is not running.
[root@host01 ~]# srvctl add exportfs -id havip1 -path /mnt/acfsmounts/acfs2 -name
export2 -options rw -clients *.example.com
[root@host01 ~]# srvctl status exportfs -name export2
export file system export2 is enabled
export file system export2 is not exported
40. Start the second Export FS resource associated with
havip1
• Start the second Export FS resource export2, associated with havip1.
Since it has an attraction start dependency on resource havip1, it executes
on the same node (host02) as havip1.
[root@host01 acfs1]# crsctl stat res ora.export2.export -f | grep START_DEPENDENCIES
START_DEPENDENCIES=hard(ora.data.vol2.acfs) attraction(ora.havip1.havip)
pullup(ora.data.vol2.acfs)
[root@host01 ~]# srvctl start exportfs -name export2
[root@host01 ~]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
[root@host01 ~]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
41. Second Export FS resource executes on the same
node (host02) as havip1
42. Relocate HAVIP havip1
• Relocate HAVIP havip1 from its current node (host02) to host01.
• Export FS resources (export1 and export2 ) associated with it have
attraction start dependency on it.
• Clusterware relocates both the associated exports to the same node as
havip1 (host01).
• All three resources execute on the same node once again.
[root@host01 acfs1]# srvctl relocate havip -id havip1 -f
HAVIP was relocated successfully
[root@host01 acfs1]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host01
[root@host01 acfs1]# srvctl status exportfs -id havip1
export file system export1 is enabled
export file system export1 is exported on node host01
export file system export2 is enabled
export file system export2 is exported on node host01
43. Exports associated with havip1 relocated to
the same node (host01) as havip1
44. Stop a Cluster File System Resource
• Stop the File System resource associated with volume VOL1 on host01
where all the HANFS resources are currently running so that it is now left
mounted only on host02.
• The second File System resource is still available on both the nodes.
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
[root@host01 acfs1]# srvctl stop filesystem -d /dev/asm/vol1-106 -n host01 -f
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host02
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol2-106
ACFS file system /mnt/acfsmounts/acfs2 is mounted on nodes host01,host02
45. Automatic relocation of export1, export2
and havip1
• As export1 has a hard stop dependency on vol1.acfs, export1 stops on
host01; because vol1.acfs is still running on host02, export1 starts on
host02.
• As a consequence, the associated HAVIP havip1 and the second export,
export2, are also migrated to host02 by Clusterware.
• All three resources execute on the same node once again.
[root@host01 ~]# crsctl stat res ora.export1.export -f | grep DEPENDENCIES
START_DEPENDENCIES=hard(ora.data.vol1.acfs) attraction(ora.havip1.havip)
pullup(ora.data.vol1.acfs)
STOP_DEPENDENCIES=hard(intermediate:ora.data.vol1.acfs)
[root@host01 ~]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
[root@host01 ~]# srvctl status exportfs -id havip1
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
47. Summary
• As a result of mutual dependencies among various HANFS resources, a given
set of associated HAVIP and Export FS resources is managed as a group by
Oracle Clusterware so that a HAVIP and multiple Export FS resources
associated with it always execute on the same node.
48. Execution of HAVIP on a node where the
largest number of file systems are
available
Illustration-III
49. Overview
• When multiple file systems are exported by a HAVIP resource, that HAVIP
will run on a node in the cluster where the largest number of ACFS file
systems identified with that HAVIP are currently mounted.
• We will create another (third) Export FS cluster resource export3 and
associate it with the same HAVIP havip1.
• On making one of the file systems unavailable on one of the nodes, we will
observe that havip1 executes on the node where all the three file systems
are mounted.
50. Create and start the third Cloud File System
resource
• Create another Cloud FS file system resource using the volume device VOL3
and the corresponding mount point
[root@host01 ~]# srvctl add filesystem -m /mnt/acfsmounts/acfs3 -d /dev/asm/vol3-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol3-106
ACFS file system /mnt/acfsmounts/acfs3 is not mounted
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol3-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol3-106
ACFS file system /mnt/acfsmounts/acfs3 is mounted on nodes host01,host02
51. Create / Start an Export FS cluster resource
export3
• Create an Export FS cluster resource called export3 which will publish the
newly created file system using HANFS. This Export FS resource has also
been associated with HAVIP havip1 .
• The newly created Export FS resource export3 executes on host02, where
havip1 and the other exports associated with it are already running.
[root@host01 ~]# srvctl add exportfs -id havip1 -path /mnt/acfsmounts/acfs3 -name
export3 -options rw -clients *.example.com
[root@host01 ~]# srvctl status exportfs -name export3
export file system export3 is enabled
export file system export3 is not exported
[root@host01 ~]# srvctl start exportfs -name export3
[root@host01 acfs3]# srvctl status exportfs -id havip1
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
export file system export3 is enabled
export file system export3 is exported on node host02
52. Export FS resource export3 executes on host02
where havip1 is already running
53. Stop all the exports associated with havip1
• Stop all the exports associated with HAVIP havip1 - this stops the HAVIP
havip1 as well.
[root@host01 acfs3]# srvctl stop exportfs -id havip1 -f
[root@host01 acfs3]# srvctl status exportfs -id havip1
export file system export1 is enabled
export file system export1 is not exported
export file system export2 is enabled
export file system export2 is not exported
export file system export3 is enabled
export file system export3 is not exported
[root@host01 acfs3]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is not running
54. Stop all the exports associated with havip1
55. Stop all the file systems being exported by
havip1
• Stop all the file systems which were being exported by HAVIP havip1
[root@host01 acfs1]# srvctl stop filesystem -d /dev/asm/vol1-106 -f
[root@host01 acfs1]# srvctl stop filesystem -d /dev/asm/vol2-106 -f
[root@host01 acfs1]# srvctl stop filesystem -d /dev/asm/vol3-106 -f
[root@host01 acfs1]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is not mounted
[root@host01 acfs1]# srvctl status filesystem -d /dev/asm/vol2-106
ACFS file system /mnt/acfsmounts/acfs2 is not mounted
[root@host01 acfs1]# srvctl status filesystem -d /dev/asm/vol3-106
ACFS file system /mnt/acfsmounts/acfs3 is not mounted
56. Stop all the file systems being exported by
havip1
57. Start unequal number of file systems on
host01 / host02
• Start three file systems on host01 and two file systems on host02
[root@host01 acfs1]# srvctl start filesystem -d /dev/asm/vol1-106
[root@host01 acfs1]# srvctl start filesystem -d /dev/asm/vol2-106
[root@host01 acfs1]# srvctl start filesystem -d /dev/asm/vol3-106 -n host01
[root@host01 acfs1]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
[root@host01 acfs1]# srvctl status filesystem -d /dev/asm/vol2-106
ACFS file system /mnt/acfsmounts/acfs2 is mounted on nodes host01,host02
[root@host01 acfs1]# srvctl status filesystem -d /dev/asm/vol3-106
ACFS file system /mnt/acfsmounts/acfs3 is mounted on nodes host01
58. Start HAVIP resource havip1
• On starting HAVIP resource havip1, it starts on node host01, where all
three file systems are mounted.
• This also triggers the start of all three exports associated with havip1
on the same node (host01).
[root@host01 acfs1]# srvctl start havip -id havip1
[root@host01 acfs1]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host01
[root@host01 acfs1]# srvctl status exportfs -id havip1
export file system export1 is enabled
export file system export1 is exported on node host01
export file system export2 is enabled
export file system export2 is exported on node host01
export file system export3 is enabled
export file system export3 is exported on node host01
59. havip1 starts on node host01 where all
the three file systems are running
60. Stop the file system underlying export1
on host01
• Let us once again verify that a HAVIP along with the exports associated with
it, always run on the same node.
• Stop the file system underlying export1 on host01.
• export1 gets relocated to host02, further leading to the relocation of
havip1 and the other two exports to host02.
[root@host01 acfs1]# srvctl stop filesystem -d /dev/asm/vol1-106 -n host01 -f
[root@host01 acfs3]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host02
[root@host01 acfs3]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
[root@host01 acfs3]# srvctl status exportfs -id havip1
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
export file system export3 is enabled
export file system export3 is exported on node host02
61. Stopping a file system on host01 causes
havip1 and associated exports to relocate to host02
62. Restart the Cloud File system resource on volume
device VOL1
• Let us restart the Cloud File System resource on volume device VOL1 so that
all three file systems are available on both nodes.
[root@host01 acfs3]# srvctl start filesystem -d /dev/asm/vol1-106
[root@host01 acfs3]# srvctl status filesystem
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
ACFS file system /mnt/acfsmounts/acfs2 is mounted on nodes host01,host02
ACFS file system /mnt/acfsmounts/acfs3 is mounted on nodes host01,host02
63. Restart the Cloud File system resource on volume
device VOL1
64. Summary
• When multiple file systems are exported by a HAVIP resource, that HAVIP, in
order to make the largest number of file systems available to the client(s),
will run on a node in the cluster where the largest number of ACFS file
systems identified with that HAVIP are currently mounted.
• Moreover, as a result of mutual dependencies among various HANFS
resources, a HAVIP and all the Export FS resources associated with it will
always execute on the same node.
66. Overview
• If there are multiple HAVIP resources in the cluster, they will execute on
various nodes in the cluster so as to load balance across the cluster.
• We will add another HAVIP resource havip2 which exports an ACFS cloud
file system resource on volume device VOL4 by means of an export FS
resource export4
• On starting the newly created Export FS resource export4, we will observe
that export4 and the corresponding HAVIP havip2 start on the node which is
not hosting havip1.
67. Create another HAVIP resource havip2
• Create another HAVIP resource havip2 on a non-pingable, non-DHCP IP
address 192.9.201.186, which exports an ACFS cloud file system resource on
volume device VOL4 by means of an Export FS resource export4.
[root@host01 ~]# srvctl add havip -address 192.9.201.186 -id havip2
[root@host01 ~]# srvctl status havip -id havip2
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is not running
[root@host01 ~]# srvctl add filesystem -m /mnt/acfsmounts/acfs4 -d /dev/asm/vol4-106
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol4-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol4-106
ACFS file system /mnt/acfsmounts/acfs4 is mounted on nodes host01,host02
[root@host01 ~]# srvctl add exportfs -id havip2 -path /mnt/acfsmounts/acfs4 -name
export4 -options rw -clients *.example.com
68. Start Export FS resource export4
• On starting the newly created resource export4, export4 and the
corresponding HAVIP havip2 start on node host01, which is not already
hosting any HAVIP.
[root@host01 ~]# srvctl start exportfs -name export4
[root@host01 acfs3]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
export file system export3 is enabled
export file system export3 is exported on node host02
export file system export4 is enabled
export file system export4 is exported on node host01
[root@host01 acfs3]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host01
70. Summary
• The HAVIP resource will execute on a node in the server cluster where
– The largest number of ACFS file systems identified with that resource
group are currently mounted and
– The least number of other HAVIP services are executing in order to load
balance across the cluster for maximum throughput.
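The two placement criteria above can be sketched as a toy scoring function. This is only a mental model under the stated assumptions (a simple two-level ranking); Clusterware's real behaviour is expressed through resource dependencies, not a function like this:

```python
# Toy model of HAVIP placement (illustrative only -- not Oracle's algorithm).
def place_havip(mounted, havips_running, group_filesystems):
    """Pick a node for a HAVIP.
    mounted: {node: set of ACFS file systems mounted there}
    havips_running: {node: count of other HAVIPs already on that node}
    group_filesystems: file systems exported by this HAVIP's group."""
    def score(node):
        # More of the group's file systems mounted is better;
        # fewer competing HAVIPs breaks ties (load balancing).
        return (len(mounted[node] & group_filesystems), -havips_running[node])
    return max(mounted, key=score)

mounted = {"host01": {"acfs1", "acfs2", "acfs3"}, "host02": {"acfs1", "acfs2"}}
havips = {"host01": 0, "host02": 0}
print(place_havip(mounted, havips, {"acfs1", "acfs2", "acfs3"}))  # host01
```

With equal mounts, the node running fewer HAVIPs wins, mirroring the load-balancing behaviour shown in Illustration IV.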
72. Overview
• Various conditions causing HAVIPs and Exports to migrate across the
cluster
• Server cluster membership change events
– Node leaving the cluster
– Node joining the cluster
• Storage Failure
• Failure of Cluster Ready Services
• Planned relocation
• We will simulate above conditions and observe the migration of HAVIPs
and Exports across the cluster
73. Node leaving the cluster
Stop High Availability Services on host02
• Stop Oracle High Availability Services on node host02 where havip1 and
three exports associated with it are currently running.
• As host02 leaves the cluster, havip1 and the three exports associated with
it are moved to host01, the only other node in the cluster.
[root@host02 ~]# crsctl stop crs
[root@host01 acfs1]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host01
export file system export2 is enabled
export file system export2 is exported on node host01
export file system export3 is enabled
export file system export3 is exported on node host01
export file system export4 is enabled
export file system export4 is exported on node host01
[root@host01 acfs1]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host01
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host01
74. Node leaving the cluster
Stop High Availability Services on host02
75. Node joining the cluster
Start Oracle High Availability Services on host02
• Start Oracle High Availability Services on node host02 so that it rejoins the
cluster.
• As host02 joins the cluster, havip1 and three exports associated with it are
moved to host02 so as to load balance HAVIPs across the cluster.
[root@host02 ~]# crsctl start crs
[root@host01 acfs1]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host01
[root@host01 acfs1]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
export file system export3 is enabled
export file system export3 is exported on node host02
export file system export4 is enabled
export file system export4 is exported on node host01
77. Storage failure
Stop File System for export4 on host01
• To simulate a storage failure, stop the file system corresponding to the
Export FS resource export4 on host01.
• The Export FS resource export4 and its associated HAVIP havip2, which are
currently running on host01, are relocated to host02.
[root@host01 acfs1]# srvctl stop filesystem -d /dev/asm/vol4-106 -n host01 -f
[root@host01 acfs1]# srvctl status filesystem
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
ACFS file system /mnt/acfsmounts/acfs2 is mounted on nodes host01,host02
ACFS file system /mnt/acfsmounts/acfs3 is mounted on nodes host01,host02
ACFS file system /mnt/acfsmounts/acfs4 is mounted on nodes host02
[root@host01 acfs1]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host02
[root@host01 acfs1]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
export file system export3 is enabled
export file system export3 is exported on node host02
export file system export4 is enabled
export file system export4 is exported on node host02
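To undo the simulated storage failure, the stopped file system can be restarted with srvctl. This is a sketch based on the volume device name used in this example; note that havip2 and export4 do not automatically fail back to host01 once the file system is available again.

```shell
# Restart the file system that was stopped to simulate the storage
# failure (device name taken from this example).
srvctl start filesystem -d /dev/asm/vol4-106 -n host01

# Verify that the file system is mounted on both nodes again.
srvctl status filesystem

# havip2 and export4 stay on host02; a manual relocation can
# rebalance them if desired.
```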
79. Planned Relocation
Relocate havip1
• Relocate havip1, which forces the HAVIP and its associated Export FS
resources to migrate from node host02 to host01.
[root@host01 acfs1]# srvctl relocate havip -id havip1 -f
HAVIP was relocated successfully
[root@host01 acfs1]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host01
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host02
[root@host01 acfs1]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host01
export file system export2 is enabled
export file system export2 is exported on node host01
export file system export3 is enabled
export file system export3 is exported on node host01
export file system export4 is enabled
export file system export4 is exported on node host02
81. Failure of Cluster Ready Services
• As a result of stopping the Cluster Ready Services stack on host01, havip1
and its associated exports get relocated to host02.
[root@host01 acfs1]# crsctl stop cluster
[root@host01 acfs1]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host02
[root@host01 acfs1]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
export file system export3 is enabled
export file system export3 is exported on node host02
export file system export4 is enabled
export file system export4 is exported on node host02
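To recover from this condition, the Cluster Ready Services stack can be restarted on host01. The commands below are a sketch under the assumption that they are run as root on host01; as with the node-rejoin case, rebalancing the HAVIPs afterwards is a manual step.

```shell
# Restart the Cluster Ready Services stack on host01.
crsctl start cluster

# Optionally rebalance by relocating havip1 back to host01.
srvctl relocate havip -id havip1 -f
srvctl status havip
```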
86. Overview
• An administrator who prefers more control over the location(s) of
exports can configure an HAVIP to run only on certain nodes using the
'disable' and 'enable' commands.
• In order to have the file systems exported over HAVIP havip1 run only
on node host02, we will disable havip1 on host01 to limit the resource.
• As a result, havip1 and its associated exports will run on host02 even
though host02 is already hosting havip2.
87. Stop and Disable havip1 on host01
• Stop havip1 and disable it on host01 so that it can no longer run on that node.
[root@host01 acfs1]# srvctl stop havip -id havip1 -f
[root@host01 acfs1]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is not running
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host02
[root@host01 acfs1]# srvctl disable havip -node host01 -id havip1
88. Start havip1
• Start havip1. As it is disabled on host01, havip1 and its associated exports
run on host02 even though host02 is already hosting havip2.
[root@host01 acfs1]# srvctl start havip -id havip1
[root@host01 acfs1]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is disabled on nodes host01
HAVIP ora.havip1.havip is running on nodes host02
HAVIP ora.havip2.havip is enabled
HAVIP ora.havip2.havip is running on nodes host02
[root@host01 acfs1]# srvctl status exportfs
export file system export1 is enabled
export file system export1 is exported on node host02
export file system export2 is enabled
export file system export2 is exported on node host02
export file system export3 is enabled
export file system export3 is exported on node host02
export file system export4 is enabled
export file system export4 is exported on node host02
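Once the restriction is no longer needed, the placement constraint can be lifted with the matching enable command. A minimal sketch, continuing this example:

```shell
# Re-enable havip1 on host01 so that it is again a candidate to run
# (or fail over) on that node.
srvctl enable havip -node host01 -id havip1

# Confirm that the per-node disable no longer appears in the status.
srvctl status havip
```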
90. Summary
• The administrator can exercise more control over the location(s)
of exports by configuring HAVIPs to run only on certain nodes using
the 'disable' and 'enable' commands.
91. Guidelines for HANFS Configuration
• Guidelines for ensuring that HANFS provides maximum scalability and
availability:
– Since HANFS relies on NFS, standard NFS configuration and
performance tuning applies to the Oracle RAC HANFS product.
– While maintaining a single HAVIP may simplify management, a
manual relocation of that HAVIP may increase the time before the
file systems resume processing and become available to clients.
– Exports that require maximum availability should be configured
with their own HAVIP(s).
– Heavily used file systems should be configured with their own
HAVIP(s) so that their intense throughput is isolated from other
HAVIPs.
– Some performance benefit may be achieved if the number of
HAVIPs equals the number of nodes in the Oracle RAC HANFS cluster,
so that each node hosts one HAVIP.
– HAVIPs should be configured so that the estimated throughput across
the attached Export FS resources is roughly similar for all HAVIPs.
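The one-HAVIP-per-node guideline can be sketched as follows for the two-node cluster in this deck. The VIP hostnames and descriptions below are hypothetical placeholders; in practice they must be unused, resolvable addresses on the public network.

```shell
# Hypothetical sketch: register one HAVIP per cluster node and spread
# the Export FS resources across them for balanced throughput.
srvctl add havip -id havip1 -address havip1-vip.example.com \
    -description "Exports backed by /mnt/acfsmounts/acfs1"
srvctl add havip -id havip2 -address havip2-vip.example.com \
    -description "Exports backed by /mnt/acfsmounts/acfs4"

srvctl start havip -id havip1
srvctl start havip -id havip2
```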
92. Files Supported by ACFS
• Oracle database data files (new in 12c)
• Binaries
– Oracle database home binaries
– Middleware binaries
– Application binaries
• Database trace files and alert logs
• Application reports
• BFILEs, Video, Audio, Text, Images
• Engineering drawings
• Other general-purpose application file data
93. Potential Use Cases
• Oracle HANFS may be used in many different ways with other Oracle ACFS
features.
• Examples:
– To provide highly available network file systems to clients, enabling
simple sharing of file systems across the network.
– Business data and unstructured data such as scanned documents,
image files, and BLOB data types can be stored in an Oracle ACFS file
system as an alternative to storing them inside a database, and
exported to clients via Oracle RAC HANFS.
– An ACFS snapshot of an ACFS file system exported via Oracle RAC
HANFS can serve as a backup.
– An Oracle Database home can be exported over Oracle RAC HANFS so
that it is always available.
– Snapshots can also be used to create read-write snapshots of Oracle
Database homes on ACFS to simplify out-of-place patching.
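The snapshot-as-backup case can be sketched with acfsutil. The snapshot name is a hypothetical placeholder; the mount point is taken from the examples in this deck.

```shell
# Create a read-only ACFS snapshot of an exported file system to serve
# as a point-in-time backup.
acfsutil snap create -r backup_snap /mnt/acfsmounts/acfs1

# Snapshots live under the hidden .ACFS/snaps directory of the mount
# point and are exported to NFS clients along with the file system.
ls /mnt/acfsmounts/acfs1/.ACFS/snaps
```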
94. Potential Use Cases
– For a database with its data files on ACFS, snapshots may be used for
backups as well as for provisioning sparse copies of databases for
testing and other purposes.
– To provide a unified logging structure for RAC: all log files, trace
files, and audit files for the various instances of RAC databases can be
stored on the same ACFS file system and exported over HANFS so that
they are always available to clients.
– In an MAA configuration, the entire standby database can be stored
on ACFS and exported over HANFS to multiple clients. In effect, this
is equivalent to configuring multiple physical standbys with only one
stream of redo being transported to the standby.
– An Oracle RAC HANFS Export FS can be configured with Oracle ACFS
security realms so that it is read-only during certain time periods,
preventing unauthorized access.
95. Potential Use Cases
– Oracle RAC HANFS can be used with ASM disk groups configured with
normal or high redundancy so that the underlying storage is
extremely fault tolerant. This effectively removes the possibility
of storage failure for a single disk group, while Oracle RAC HANFS
keeps the export itself always available, creating a single,
extremely highly available file server.
• The list is endless.
96. Conclusion
• From Oracle Database 12c Release 1 onwards, files stored on ACFS in a
clustered configuration can be accessed by servers outside the cluster
using NFS, and the NFS-exported paths are highly available.
• A given set of associated HAVIP and Export FS resources is managed as a
group by Oracle Clusterware, so that an HAVIP and the multiple Export FS
resources associated with it always execute on the same node.
• In order to load balance across the cluster for maximum throughput, an
HAVIP resource executes on the node in the server cluster where
– the largest number of ACFS file systems identified with that resource
group are currently mounted, and
– the fewest other HAVIPs are executing.
• The location of exports in the cluster can be controlled by disabling /
enabling the associated HAVIP on specific nodes.
• HANFS over Oracle ACFS may be used in many different ways with other
Oracle ACFS features.