Oracle ACFS High Availability NFS Services (HANFS) Part-I
By Anju Garg ◾ Jim Czuprynski, Editor
SELECT Journal, Q3-15 (www.ioug.org)
Big data and the Internet of Things (IoT) are hungry for massive amounts of ordinary
file system storage to capture all of their related data structures, and Oracle DBAs
will almost certainly have to deal with these demands in the near future.
Many IT organizations have decided to adopt an open stack storage
approach, while others are still evaluating whether NAS, NFS or other
file system storage mechanisms will be able to handle the upcoming
flood of requests for extensive data storage.
The good news for Oracle DBAs is that Oracle Database 12c R1 now
offers the ability to leverage ACFS as a high availability NFS file
system (HANFS). This means that files stored on ACFS in a clustered
configuration can be accessed by server(s) outside the cluster using
NFS. NFS-exported paths are highly available, so if the node hosting
the export crashes, the NFS service will not be interrupted and the
NFS client will still be able to access the ACFS file system.
This article discusses various prerequisites and features of HANFS
and demonstrates how to configure HANFS for an ACFS file system
by means of an export FS resource associated with a HAVIP. It will
also explore how ADVM/ACFS can leverage Flex ASM features.
What Is a Cloud File System?
A cloud file system allows multiple clients to easily share remotely
stored data and collaborate on projects in real time. It helps
organizations and individuals to rapidly deploy applications and
outsource massive amounts of structured as well as unstructured
data to external cloud providers leading to cost savings, lower
management overhead and rapid elasticity.
Oracle Cloud File System
Oracle’s Cloud File System is designed to manage Oracle database
files (12c R1 onward) as well as general purpose files outside of an
Oracle database across multiple operating system platforms. Oracle
Cloud FS allows you to have a single instance/clustered file system
built on an ASM foundation, and it includes ASM Dynamic Volume
Manager (ADVM) and the ASM Cluster File System (ACFS):
◾◾ ASM Dynamic Volume Manager (ADVM) is a volume manager
designed for both cluster and single-host volumes and provides a
standard disk device driver interface to clients. File systems and
other disk-based applications send I/O requests to Oracle ADVM
volume devices just as they would to other storage devices on a
vendor operating system.
◾◾ Automatic Storage Management Cluster File System (ACFS)
is a general-purpose POSIX, X/OPEN and Microsoft Windows
compliant file system designed for single-node as well as
clustered configurations. An ACFS file system is created on top of
an ADVM dynamic volume. Because these dynamic volumes are
essentially ASM files stored within an ASM disk group, they are
able to leverage and benefit from all the powerful ASM features
like striping, mirroring, rebalancing, fast resync and even Flex
ASM (Oracle Database 12c onward). Oracle ACFS offers various
advanced data services such as snapshots, replication, tagging,
security, encryption and auditing. Starting with Oracle Database
12c R1, Oracle ACFS supports Oracle database files in addition
to general purpose files.
ACFS Configurations
Oracle ACFS supports both Oracle Restart (i.e., single instance
non-clustered) and Oracle Grid Infrastructure clustered
configurations. Data stored on an ACFS file system can also be
accessed by network file system protocols such as NFS (Network File
System) or Microsoft’s CIFS (Common Internet File System). Here are
some examples of supported ACFS configurations:
◾◾ Single Instance Non-Clustered Configuration. Files stored
on ACFS (non-shared storage) are available to just a single node.
◾◾ Single Instance Non-Clustered Configuration with NFS/
CIFS Network File Servers. Files stored on ACFS (non-shared
storage) are exported to clients over the network (e.g., database
home stored on ACFS on a server can be made available to other
server(s) over NFS or CIFS).
◾◾ Oracle Grid Infrastructure Cluster Configuration. Files
stored on ACFS, which are located within shared storage, can be
directly accessed from any of the nodes in the cluster.
◾◾ Oracle Grid Infrastructure Cluster Configuration with NFS/
CIFS Network File Servers. Files stored on ACFS (i.e., shared
storage) can be accessed from any of the nodes in the cluster as
well as by server(s) outside the cluster using network file system
protocols such as NFS or CIFS.
Note: For the sake of simplicity, this article will focus on configuring
ACFS in a clustered configuration with an NFS client.
Pre-12c ACFS Cluster Configuration with NFS:
Limitations
ACFS cluster configuration with NFS was available prior to Oracle
Database 12c R1; however, NFS-exported paths were not highly
available. In other words, if the node hosting the export crashed, the
NFS service was interrupted and the NFS client could not access the
ACFS file system any longer.
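The manual export in that pre-12c arrangement is tied to one node's /etc/exports file, which is exactly why it is a single point of failure. A sketch of such an entry (the path and client pattern below are illustrative, not taken from an actual configuration):

```shell
# Illustrative /etc/exports entry on the one node hosting the export.
# If this node crashes, nothing re-publishes the export from another node.
/mnt/acfsmounts/acfs1  *.example.com(rw,sync)

# Publish the entry to the kernel NFS server (run as root):
# exportfs -ra
```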
ACFS Cluster Configuration with Highly Available
NFS (HANFS)
With Oracle Database 12c R1, Cloud FS includes Highly Available
NFS (HANFS) over ACFS, which enables highly available NFS servers
to be configured using Oracle ACFS clusters. The NFS exports are
exposed through Highly Available VIPs (HAVIPs), which allows
Oracle’s Clusterware agents to ensure that HAVIPs and NFS exports
are always available. If the node hosting the export(s) fails, the
corresponding HAVIP and its corresponding NFS export(s) will
automatically fail over to one of the surviving nodes so that the NFS
client continues to receive uninterrupted service of NFS exported
paths. The basic architecture of HANFS is shown in Figure 1.
Figure 1: Highly Available Network File System (HANFS)
HANFS: Points, Benefits and Drawbacks
It should be obvious that HANFS offers some extremely powerful
capabilities, but there are some important prerequisites to consider:
◾◾ HANFS requires NFS in order to run and relies on the base
operating system to provide all NFS-related functionality (such as
the NFS server and NFS client utilities).
◾◾ NFS needs to be running on each node that can host the
HANFS services.
◾◾ Oracle ACFS HANFS 12.1 works in conjunction with NFS v2 and
v3 over IPv4.
◾◾ While base NFS supports file locking, HANFS does not support
NFS file locking.
◾◾ HANFS is not supported in Oracle Restart configurations.
◾◾ HANFS for Oracle Grid Infrastructure operates only with Oracle
ACFS file systems configured for clusterwide accessibility and
does not support Oracle ACFS file systems configured for access
on particular subsets of cluster nodes.
◾◾ HANFS is not supported with non-Oracle ACFS file systems.
◾◾ When multiple file systems are associated with a HAVIP, that
HAVIP will run on the node in the cluster where the largest
number of file systems are available.
◾◾ Whenever a node joins or leaves the cluster, Oracle Clusterware relocates HAVIPs so as to load balance them across the cluster.
HANFS: Flex ASM and Flex Cluster
HANFS is a feature of Oracle Database 12c, so it is important to mention two other new 12c features, Flex ASM and Flex Cluster, which are relevant in this context.
Prior to Oracle Database 12c, an ASM client (a database instance or an ASM Cluster File System (ACFS)) could connect only to an ASM instance running on the same host. If an ASM instance failed, the shared disk groups, and hence the ACFS file systems, could no longer be accessed on that node. With the introduction of Flex ASM in 12c, the hard dependency between ASM and its clients has been relaxed: only a smaller number of ASM instances need to run, on a subset of the servers in a cluster. ASM clients on nodes where no ASM instance is running connect to other ASM instances over a network to fetch metadata. Moreover, if an ASM instance fails, its active clients can fail over to another surviving ASM instance on a different server, resulting in uninterrupted availability of shared storage and the corresponding ACFS file systems.
Oracle Flex Clusters contain two types of nodes arranged in a hub-and-spoke architecture: hub nodes and leaf nodes. Hub nodes, just like the nodes in any pre-12c cluster, are tightly coupled to the cluster and have direct access to shared storage. Leaf nodes are more loosely coupled to the cluster than hub nodes; they do not require direct access to shared storage but instead request data through hub nodes.
Flex ASM can be configured on either a standard cluster or a Flex Cluster. When Flex ASM runs on a standard cluster, ASM services can run on a subset of cluster nodes, servicing clients across the cluster. When Flex ASM runs on a Flex Cluster, ASM services can run on a subset of hub nodes, servicing clients across all of the hub nodes in the Flex Cluster. Therefore, in a Flex Cluster only hub nodes can host the HANFS services, because only hub nodes have direct access to storage. Because ADVM/ACFS utilize an ADVM proxy instance to connect to Flex ASM, an ADVM proxy instance must be running on each hub node that can host HANFS services.
See Table 1 for a list of the operating systems and platforms that HANFS supports.
Platform / Operating System   Versions Supported
AIX                           AIX v6.1 or later
Solaris                       Solaris 11 GA or later, X64 and Sparc64
Linux                         ◾◾ Red Hat Enterprise Linux v5.3 and later or v6.0 and later (requires nfs-utils-1.0.9-60 or later)
                              ◾◾ Oracle Enterprise Linux v5.3 and later or v6.0 and later, with the Unbreakable Enterprise Kernel or the Red Hat Compatible Kernel (requires nfs-utils-1.0.9-60 or later)
                              ◾◾ SUSE Enterprise Server v11 or later (requires nfs-kernel-server-1.2.1-2.24.1 or later)
Table 1: HANFS: Supported Platforms and Operating Systems
HANFS Component Resources
In addition to ACFS/ADVM and ASM, HANFS also relies on the following Oracle Database 12c R1 Clusterware resources:
◾◾ File System. An ACFS file system is an Oracle Clusterware resource that you want to expose through HANFS. This ACFS file system should be configured so that it will be mounted on all nodes.
◾◾ Highly Available VIP (HAVIP). The HAVIP resource is a special class of the standard Oracle node VIP Clusterware resource. Each HAVIP resource manages a unique IP address in the cluster on a single node at any time, and this global resource will be relocated to an active node in the cluster as necessary, with the objective of providing uninterrupted service of NFS-exported paths to its client(s). A HAVIP resource cannot be started until at least one file system export resource has been created for it.
◾◾ Export FS. An Export File System (FS) resource is a cluster resource that publishes one or more designated ACFS file systems to client(s) using HANFS. An Export FS resource is associated with a HAVIP; together, they provide uninterrupted service of NFS-exported paths to the cluster's client(s). If an exported ACFS file system becomes unavailable on its current cluster node, then Oracle Clusterware will automatically relocate both the associated Export FS and HAVIP resources to another node in the cluster, one where the largest number of file systems associated with the HAVIP are available, so that all file systems associated with the HAVIP will be exported from that node.
HANFS Client Usage
After a HANFS cluster service has been configured via a HAVIP and associated Export FS resources, a client node can issue a request to mount the corresponding Export FS path by referencing the HAVIP. After mounting the file system successfully, applications executing on the client node can access files from the exported ACFS file system. During relocation of a HAVIP resource and its associated Export FS resources, the client may notice a delay while the NFS connection is re-established, but it will be able to resume operation without any client-side interaction.
HANFS: A Demonstration
To illustrate how to configure and leverage HANFS, including how to enable ADVM/ACFS to leverage Flex ASM and the various dependencies between HANFS component resources, we will utilize an ASM Flex Cluster that is configured with two hub nodes (host01 and host02). Our first set of tasks is to create an ACFS file system resource, an Export FS resource and a HAVIP resource.
Step 1: Check Prerequisites
First, let's verify that all kernel modules needed for ACFS and ADVM are loaded on all nodes, as shown in Listing 1.
[root@host01 ~]# lsmod |grep oracle
oracleacfs 2837904 1
oracleadvm 342512 1
oracleoks 409560 2 oracleacfs,oracleadvm
oracleasm 84136 1
[root@host02 ~]# lsmod |grep oracle
oracleacfs 2837904 1
oracleadvm 342512 1
oracleoks 409560 2 oracleacfs,oracleadvm
oracleasm 84136 1
Listing 1: Verifying ACFS and ADVM Kernel Modules
The ASM Dynamic Volume Manager (ADVM) proxy instance is a special Oracle instance that enables ADVM to connect to Flex ASM; it is required to run on the same node as ADVM and ACFS. For a volume device to be visible on a node, an ASM proxy instance must be running on that node. We'll verify this with the code shown in Listing 2.
[root@host02 ~]# crsctl stat res ora.proxy_advm -t
----------------------------------------------------------------
Name Target State Server State details
----------------------------------------------------------------
Local Resources
----------------------------------------------------------------
ora.proxy_advm
ONLINE ONLINE host01 STABLE
ONLINE ONLINE host02 STABLE
Listing 2: Verifying ASM Proxy Instances
Because HANFS requires a running NFS service on each node that can host the HANFS services, we'll next verify that the NFS service is running on both nodes, as shown in Listing 3.
[root@host01 ~]# service nfs status
rpc.mountd (pid 4921) is running...
nfsd (pid 4910 4909 4908 4907 4906 4905 4904 4903) is running...
rpc.rquotad (pid 4882) is running...
[root@host02 ~]# service nfs status
rpc.mountd (pid 4985) is running...
nfsd (pid 4982 4981 4980 4979 4978 4977 4976 4975) is running...
rpc.rquotad (pid 4941) is running...
Listing 3: Verifying Availability of NFS Services
Step 2: Create ADVM Volumes
As shown in Listing 4, we'll next modify the compatible.advm attribute of the DATA ASM disk group to enable all the new ASM Dynamic Volume Manager (ADVM) features included in release 12.1, then create four new volumes (VOL1, VOL2, VOL3 and VOL4) within the DATA disk group with a requested volume size of 300 MB each.
[grid@host01 root]$ asmcmd setattr -G DATA compatible.advm 12.1.0.0.0
[grid@host01 root]$ asmcmd volcreate -G DATA -s 300m VOL1
asmcmd volcreate -G DATA -s 300m VOL2
asmcmd volcreate -G DATA -s 300m VOL3
asmcmd volcreate -G DATA -s 300m VOL4
Listing 4: Creating New ADVM Volumes
Listing 5 shows how to examine the newly created volumes and take note of the volume devices associated with them. (Note that each 300 MB request has been rounded up to 320 MB, the nearest multiple of the 32 MB resize unit.)
[grid@host01 root]$ asmcmd volinfo -G DATA VOL1
Diskgroup Name: DATA
Volume Name: VOL1
Volume Device: /dev/asm/vol1-106
State: ENABLED
Size (MB): 320
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage: ACFS
Mountpath:
[grid@host01 root]$ asmcmd volinfo -G DATA VOL2
Diskgroup Name: DATA
Volume Name: VOL2
Volume Device: /dev/asm/vol2-106
State: ENABLED
Size (MB): 320
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
[grid@host01 ~]$ asmcmd volinfo -G DATA VOL3
Diskgroup Name: DATA
Volume Name: VOL3
Volume Device: /dev/asm/vol3-106
State: ENABLED
Size (MB): 320
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
[grid@host01 ~]$ asmcmd volinfo -G DATA VOL4
Diskgroup Name: DATA
Volume Name: VOL4
Volume Device: /dev/asm/vol4-106
State: ENABLED
Size (MB): 320
Resize Unit (MB): 32
Redundancy: MIRROR
Stripe Columns: 4
Stripe Width (K): 128
Usage:
Mountpath:
Listing 5: Verifying New ADVM Volumes
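When several volumes are involved, the device paths can be harvested from asmcmd volinfo output programmatically instead of read by eye. A minimal sketch; the sample text below is an abbreviated copy of Listing 5, and in practice you would pipe `asmcmd volinfo -G DATA -a` into the same filter:

```shell
# Pull the "Volume Device" paths out of asmcmd volinfo output.
# Sample output is embedded here for illustration; on a real cluster run:
#   asmcmd volinfo -G DATA -a | awk -F': *' '/Volume Device/ {print $2}'
volinfo_sample='Diskgroup Name: DATA
Volume Name: VOL1
Volume Device: /dev/asm/vol1-106
State: ENABLED
Volume Name: VOL2
Volume Device: /dev/asm/vol2-106'

# Split each line on ": " and print the value of every Volume Device line.
printf '%s\n' "$volinfo_sample" | awk -F': *' '/Volume Device/ {print $2}'
```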
Step 3: Create ACFS File Systems and
Corresponding Mount Points
Next, we will construct an ACFS file system on each of the newly
created volumes, as shown in Listing 6. And then we’ll create new
mount points on all nodes that will be mounting the ACFS file
system, as Listing 7 shows.
[root@host01 ~]# mkfs -t acfs /dev/asm/vol1-106
mkfs -t acfs /dev/asm/vol2-106
mkfs -t acfs /dev/asm/vol3-106
mkfs -t acfs /dev/asm/vol4-106
Listing 6: Creating ACFS File Systems
[root@host01 ~]# mkdir -p /mnt/acfsmounts/acfs1
mkdir -p /mnt/acfsmounts/acfs2
mkdir -p /mnt/acfsmounts/acfs3
mkdir -p /mnt/acfsmounts/acfs4
[root@host02 ~]# mkdir -p /mnt/acfsmounts/acfs1
mkdir -p /mnt/acfsmounts/acfs2
mkdir -p /mnt/acfsmounts/acfs3
mkdir -p /mnt/acfsmounts/acfs4
Listing 7: Creating ACFS File System Mount Points
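The eight mkdir calls in Listing 7 can be wrapped in a small helper that is safe to re-run on any node (the function name and arguments are illustrative, not part of any Oracle tooling):

```shell
# make_acfs_mounts BASE COUNT -- create COUNT ACFS mount points under BASE.
# mkdir -p is idempotent, so re-running this on a node that already has
# some or all of the directories is harmless.
make_acfs_mounts() {
  base="$1"
  count="$2"
  i=1
  while [ "$i" -le "$count" ]; do
    mkdir -p "${base}/acfs${i}"
    i=$((i + 1))
  done
}

# As used in the article (run as root on each node):
# make_acfs_mounts /mnt/acfsmounts 4
```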
Now, we will configure various HANFS resources (i.e., Cloud File
System, HAVIP and Export File System).
Step 4: Configure Cloud File System Resource for
ACFS File System
Now it’s time to create an Oracle Cloud File System resource and
verify that the Cloud File System is working correctly. Listing 8 shows
the srvctl commands to create a Cloud File System resource using
the volume device VOL1 along with the mount point /mnt/acfsmounts/acfs1.
[root@host01 ~]# srvctl add filesystem -m /mnt/acfsmounts/acfs1 -d /dev/asm/vol1-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is not mounted
[root@host01 ~]# mount | grep vol1
[root@host02 ~]# mount | grep vol1
Listing 8: Creating a Cloud File System Resource
Note that the ACFS file system will remain unmounted until the file
system resource is actually started, as Listing 9 shows.
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol1-106
[root@host01 ~]# srvctl status filesystem -d /dev/asm/vol1-106
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
[root@host01 ~]# mount | grep vol1
/dev/asm/vol1-106 on /mnt/acfsmounts/acfs1 type acfs (rw)
[root@host02 ~]# mount | grep vol1
/dev/asm/vol1-106 on /mnt/acfsmounts/acfs1 type acfs (rw)
Listing 9: Starting a Cloud File System Resource
Step 5: Verification of Cloud File System Resource
To confirm that the new Cloud File System is indeed working
properly and accessible from each node, we’ll create a small text file
inside it from host01 and then access that file from host02, as
shown in Listings 10 and 11, respectively.
[root@host01 ~]# echo "Test File on ACFS" > /mnt/acfsmounts/acfs1/testfile.txt
Listing 10: Creating New Test File
[root@host02 asm]# cat /mnt/acfsmounts/acfs1/testfile.txt
Test File on ACFS
Listing 11: Accessing Test File
Finally, as Listing 12 shows, we’ll modify access privileges for this
new file so that any user can access it.
[root@host01 ~]# ls -l /mnt/acfsmounts/acfs1/testfile.txt
-rw-r--r-- 1 root root 24 May 2 13:51 /mnt/acfsmounts/acfs1/testfile.txt
[root@host01 ~]# chmod 777 /mnt/acfsmounts/acfs1/testfile.txt
[root@host01 ~]# ls -l /mnt/acfsmounts/acfs1/testfile.txt
-rwxrwxrwx 1 root root 24 May 2 13:51 /mnt/acfsmounts/acfs1/testfile.txt
[root@host02 asm]# ls -l /mnt/acfsmounts/acfs1/testfile.txt
-rwxrwxrwx 1 root root 24 May 2 13:51 /mnt/acfsmounts/acfs1/testfile.txt
Listing 12: Changing ACLs for Test File
At this point, we have created and tested a new Cloud File System
Resource. Now we will create HAVIP and Export FS resources and
make this file system available to a client server using HANFS.
Step 6: Configure a HAVIP Resource
Our next step is to create a HAVIP resource called havip1 on a
non-pingable, non-DHCP IP address (e.g., 192.9.201.184). Listing 13
shows the srvctl commands to create and configure this resource.
[root@host01 ~]# srvctl add havip -address 192.9.201.184 -id havip1
[root@host01 ~]# srvctl config havip -id havip1
HAVIP exists: /havip1/192.9.201.184, network number 1
Description:
Listing 13: Creating a HAVIP Resource
If we try to start the newly created HAVIP resource havip1, it fails
because a HAVIP resource has a hard dependency on resource type
ora.havip1.export.type and hence requires at least one Export FS
configured and associated with it, as the example in Listing
14 shows.
[root@host01 ~]# crsctl stat res ora.havip1.havip
NAME=ora.havip1.havip
TYPE=ora.havip.type
TARGET=OFFLINE
STATE=OFFLINE
[root@host01 ~]# srvctl status havip -id havip1
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is not running
[root@host01 ~]# srvctl start havip -id havip1
PRCR-1079 : Failed to start resource ora.havip1.havip
CRS-2805: Unable to start 'ora.havip1.havip' because it has a 'hard' dependency on resource type
'ora.havip1.export.type' and no resource of that type can satisfy the dependency
[root@host01 ~]# crsctl stat res ora.havip1.havip -f | grep DEPENDENCIES
START_DEPENDENCIES=hard(ora.net1.network,uniform:type:ora.havip1.export.type) weak(global:ora.gns) attraction(ora.data.vol1.acfs) dispersion:active(type:ora.havip.type) pullup(ora.net1.network) pullup:always(type:ora.havip1.export.type)
STOP_DEPENDENCIES=hard(intermediate:ora.net1.network,uniform:intermediate:type:ora.havip1.export.type)
Listing 14: Failure When Starting HAVIP Without Export FS
Step 7: Configure an Export File System Resource
Our next step is to create an Export FS cluster resource called export1
associated with HAVIP havip1. This Export FS resource publishes the
specified file system using HANFS. As shown in Listing 15, we used
the following options for the srvctl add exportfs command:
◾◾ -id havip1: the HAVIP resource used to export the file system
◾◾ -path /mnt/acfsmounts/acfs1: the file system being exported
◾◾ -name export1: the name used to identify the Export FS resource
◾◾ -options rw: the NFS options for the exported file system
◾◾ -clients *.example.com: the clients permitted to access the
exported file system.
[root@host01 ~]# srvctl add exportfs -id havip1 -path /mnt/acfsmounts/acfs1 -name export1 -options rw -clients *.example.com
[root@host01 ~]# srvctl status exportfs -name export1
export file system export1 is enabled
export file system export1 is not exported
[root@host01 ~]# srvctl config exportfs -name export1
export file system export1 is configured
Exported path: /mnt/acfsmounts/acfs1
Export Options: rw
Configured Clients: *.example.com
Listing 15: Creating an Export FS Resource
As Listing 16 shows, our next step is to start the newly created
Export FS resource named export1. We'll also confirm its successful
startup and verify that the Export FS has been exported on one of the
nodes in our cluster:
[root@host01 ~]# srvctl start exportfs -name export1
[root@host01 ~]# srvctl status exportfs -name export1
export file system export1 is enabled
export file system export1 is exported on node host02
Listing 16: Starting Export FS Resource
Let’s also confirm that HAVIP resource havip1 is automatically
started on the same host (host02) as Export FS resource export1, as
shown in Listing 17.
[root@host01 ~]# srvctl status havip
HAVIP ora.havip1.havip is enabled
HAVIP ora.havip1.havip is running on nodes host02
[root@host01 ~]# crsctl stat res ora.havip1.havip -t
----------------------------------------------------------------
Name Target State Server State details
----------------------------------------------------------------
Cluster Resources
----------------------------------------------------------------
ora.havip1.havip
1 ONLINE ONLINE host02 STABLE
----------------------------------------------------------------
[root@host02 asm]# ifconfig |grep 184
inet addr:192.9.201.184 Bcast:192.9.201.255 Mask:255.255.255.0
Listing 17: Verifying Status of HAVIP Resource
Step 8: Mount the HANFS Exported File System
Almost done! Let’s now verify that the HANFS Exported File System
can be mounted by the client. As Listing 18 shows, we will attempt to
mount the HANFS exported file system on a HANFS client server
named server1.
[root@server1 ~]# mkdir -p /mnt/hanfs1
[root@server1 ~]# mount -t nfs 192.9.201.184:/mnt/acfsmounts/acfs1 /mnt/hanfs1
[root@server1 ~]# mount |grep hanfs1
192.9.201.184:/mnt/acfsmounts/acfs1 on /mnt/hanfs1 type nfs (rw,addr=192.9.201.184)
[root@server1 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda5 71241024 6221700 61342048 10% /
/dev/sda3 4956316 141360 4559124 4% /tmp
/dev/sda1 101086 11378 84489 12% /boot
tmpfs 200776 0 200776 0% /dev/shm
192.9.201.184:/mnt/acfsmounts/acfs1
327680 144384 183296 45% /mnt/hanfs1
Listing 18: Mounting HANFS Exported File System on Client
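Because the client may see a brief pause while the NFS connection is re-established during a HAVIP relocation, the mount should use hard rather than soft semantics so that I/O is retried transparently. A hedged /etc/fstab entry for server1 (the option values are typical starting points, not taken from the article):

```shell
# Illustrative /etc/fstab entry on the NFS client server1.
# "hard" makes the client retry I/O transparently across a HAVIP failover;
# timeo (tenths of a second) and retrans are examples only -- tune per network.
192.9.201.184:/mnt/acfsmounts/acfs1  /mnt/hanfs1  nfs  hard,nfsvers=3,timeo=600,retrans=2  0  0
```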
To confirm the success of the HANFS mount, we will simply verify the
contents of the text file created earlier, as Listing 19 demonstrates.
[root@server1 ~]# cat /mnt/hanfs1/testfile.txt
Test File on ACFS
Listing 19: Viewing Test File on HANFS Mount Point
Next Steps
We’ve successfully demonstrated the rudiments of how to configure
and utilize HANFS for an ACFS file system by means of an Export FS
resource associated with a Highly Available VIP (HAVIP). In the
subsequent articles in this series, we will:
◾◾ Demonstrate the mutual dependencies among various HANFS
component resources.
◾◾ Configure multiple exports associated with multiple HAVIPs.
◾◾ Delve deeply into the dependencies among HANFS
component resources.
◾◾ Demonstrate what happens during migration of exports and
HAVIPs across the cluster.
◾◾ Discuss several potential practical use cases for this feature.
References
Oracle ACFS File System Resource Management
Oracle Database 12c SRVCTL Command Reference, August 2013
Helmut’s RAC / JEE Blog
Oracle Database 12c Clusterware Administration and Deployment Guide,
August 2013
Oracle Database 12c Automatic Storage Management Guide, August 2013
Benefits of Oracle ACFS White Paper, January 2015
OOW 2011 presentation Managing Storage in Private Clouds with Oracle Cloud
File System
Oracle ACFS White Paper, July 2014
Oracle White Paper on “Highly Available NFS over Oracle ASM Cluster File
System (ACFS)”
Oracle Database 12c Automatic Storage Management Guide, July 2014
Contact
Anju Garg is an Oracle ACE associate with over 12 years of experience in the IT industry
in various roles. Since 2010, she has trained over 100 DBAs from across the world
in various core DBA technologies such as RAC, Data Guard, Performance Tuning, SQL
statement tuning, database administration and more.