Oracle RAC 12c (12.1.0.2) Operational Best Practices
Markus Michalewicz
Director of Product Management
Oracle Real Application Clusters (RAC)
October 1st, 2014
@OracleRACpm
http://www.linkedin.com/in/markusmichalewicz
http://www.slideshare.net/MarkusMichalewicz
Copyright © 2014, Oracle and/or its affiliates. All rights reserved.
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.
Operational Best Practices
The best practices are organized as a matrix of use cases (Generic Clusters, Extended Cluster, Dedicated (OLTP / DWH), Consolidated Environments) against areas (Storage, OS, Network, Cluster, DB), covering installation and updates over time.
http://www.slideshare.net/MarkusMichalewicz/oracle-rac-12c-collaborate-best-practices-ioug-2014-version
Program Agenda
1. New in Oracle RAC 12.1.0.2 (Install)
Operational Best Practices for:
2. Generic Clusters
3. Extended Cluster
4. Dedicated Environments
5. Consolidated Environments
Appendices A – D
New in 12.1.0.2: GIMR – No Choice Anymore
12.1.0.1: the Grid Infrastructure Management Repository (GIMR) was optional.
12.1.0.2: the GIMR is always installed.
• A single-instance Oracle Database 12c Container Database with one PDB:
– The resource is called “ora.mgmtdb”.
– Future consolidation planned.
– Installed on one of the (HUB) nodes.
– Managed as a failover database.
– Stored in the first ASM disk group created.
12.1.0.1 disk group creation: start with “GRID” disk group
12.1.0.2 disk group creation: start with GIMR hosting disk group
•GIMR typically does not require redundancy for the disk group.
–Hence, do not share with GRID DG.
•Clusterware files (Voting Files and OCR) are easy to relocate
–See example in Appendix A.
•More information:
–How to Move GI Management Repository to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
–Managing the Cluster Health Monitor Repository (Doc ID 1921105.1)
Recommendation Change in Disk Group Creation
More Information in Appendix A
ACFS News
• ACFS is Free of Charge!
– All functionality for non-database files; no exceptions.
– For database files, all ACFS functionality applies, with the following exceptions:
• Snapshots: require DB EE
• Replication, Encryption, Security, and Auditing: not available for DB files
– The respective DB functionality should be used instead (e.g. Advanced Security Option)
• ACFS Support for Exadata Systems (Linux only)
– 12.1.0.2 supports the following database versions on ACFS on Exadata Database Machines:
• Oracle Database 10g Rel. 2 (10.2.0.4 and 10.2.0.5)
• Oracle Database 11g (11.2.0.4)
• Oracle Database 12c (12.1.0.1+)
• More information:
– “My Oracle Support” (MOS) Note 1929629.1 – Oracle ACFS Support on Oracle Exadata Database Machine (Linux only)
• Test & Dev Database Management made simple with gDBclone
– gDBClone is a simple sample script that leverages Oracle ACFS snapshot functionality to create space-efficient copies for managing Test and Dev Oracle Databases.
Simplified ACFS Licensing

Oracle ACFS Feature                  | Oracle Database Files | Non-Oracle Database Files
ACFS features other than those below | FREE                  | FREE
Snapshots                            | Oracle DB EE required | FREE
Encryption                           | Not Available         | FREE
Security                             | Not Available         | FREE
Replication                          | Not Available         | FREE
Auditing                             | Not Available         | FREE
ACFS and a Simple, Free-of-Charge Approach to Managing Test & Dev Oracle Database Environments
The gDBclone sample script takes databases from any source and duplicates them on the Test & Dev cluster, using ACFS snapshots to create space-efficient copies.
gDBclone automatically converts databases from any type to any type; quickly test your application on a RAC test database using your single-instance production data.
Look for the “gDBClone Database Clone/Snapshot Management Script” and WP here: http://www.oracle.com/technetwork/indexes/samplecode/index.html
New in 12.1.0.2: Recommendation to use Flex Cluster
12.1.0.1: go with a Standard Cluster.
12.1.0.2: use a Flex Cluster (includes Flex ASM by default).
One exception: if installing for an Extended Oracle RAC Cluster, use a Standard Cluster + Flex ASM.
Continue to use Leaf Nodes for Applications in 12.1.0.2
(DBCA screenshot, despite running Leaf Nodes.)
[GRID]> olsnodes -s -t
germany Active Unpinned
argentina Active Unpinned
brazil Active Unpinned
italy Active Unpinned
spain Active Unpinned
More Information in Appendix D
New Network Flexibility in 12.1.0.2 – Recommendation
Install what’s necessary; configure what’s required (later).
More Information in Appendix B
Installation Complete – Result
(Figure: germany, argentina, and brazil run Oracle GI as HUB nodes with Oracle RAC; italy and spain run Oracle GI as Leaf nodes.)
• Server OS:
– HUBs: 4 GB+ memory recommended.
• One HUB at a time will host the GIMR database.
• Only HUBs will host (Flex) ASM instances.
• Leaf nodes can have less memory, dependent on the use case.
• The installer enforces the HUB minimum memory requirement.
– OL 6.5 UEK (other kernels are supported)
• Databases:
1. “rdwh”, on all HUBs
2. “cons”, on argentina and brazil
Automatic Diagnostic Repository (ADR)
(Figure: ADR directory structure – ADR_base/diag/{asm, rdbms, tnslsnr, clients, crs, …})
More Information in Appendix C
• Oracle Grid Infrastructure now supports the Automatic Diagnostic Repository.
• ADR simplifies log analysis by
– centralizing most logs under a defined folder structure,
– maintaining a history of logs, and
– providing its own command-line tool to manage diagnostic information.
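A minimal sketch of that command-line tool (adrci, which ships with Grid Infrastructure 12c) is shown below; the guard keeps the snippet harmless on hosts without an Oracle installation:

```shell
#!/bin/sh
# List all ADR homes via adrci; each home holds the alert log and trace
# history for one component (asm, rdbms, crs, ...).
if command -v adrci >/dev/null 2>&1; then
    adrci exec="show homes"
else
    echo "adrci not on PATH"
fi
```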
Operational Best Practices – Generic Clusters
(Use case × Area matrix: Generic Clusters, Extended Cluster, Dedicated (OLTP / DWH), Consolidated Environments against Storage, OS, Network, Cluster, DB.)
Generic Clusters – Storage

Area    | Generic Clusters
Storage | Appendix A

Step 1: Create “GRID” Disk Group – Generic Cluster
Step 2: Move Clusterware Files
Step 3: Move ASM SPFILE / password file
More Information in Appendix A
Avoid memory pressure! Use Memory Guard.
Use Solid State Disks (SSDs) to host swap. More information in “My Oracle Support” (MOS) note 1671605.1 – “Use Solid State Disks to host swap space in order to increase node availability”.
Use HugePages for the SGA (Linux). More information in MOS notes 361323.1 & 401749.1.
Avoid Transparent HugePages (Linux 6). See the alert in MOS note 1557478.1.
Generic Clusters – OS / Memory
(Figure: a two-node cluster, germany and argentina, under memory pressure begins swapping.)
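The Linux-side state behind these recommendations can be inspected with a short script; the /sys path below is the usual RHEL/OL location and is an assumption for other distributions:

```shell
#!/bin/sh
# Inspect the memory settings the recommendations above refer to.

# HugePages configured vs. free (MOS 361323.1 / 401749.1)
grep -i hugepages /proc/meminfo || echo "no HugePages info"

# Transparent HugePages should be 'never' on Linux 6 (MOS 1557478.1);
# the bracketed entry in the output is the active mode.
THP=/sys/kernel/mm/transparent_hugepage/enabled
if [ -r "$THP" ]; then
    cat "$THP"
else
    echo "THP interface not present"
fi

# Swap size and usage, relevant when placing swap on SSDs (MOS 1671605.1)
grep -i swap /proc/meminfo || echo "no swap info"
```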
• OraChk
– Formerly RACcheck
– A.k.a. ExaChk
• RAC Configuration Audit Tool
– Details in MOS note 1268927.1
• Checks “Oracle” (databases):
– Standalone Database
– Grid Infrastructure & Oracle RAC
– Maximum Availability Architecture (MAA) validation (if configured)
– Oracle hardware setup configuration
Generic Clusters – OS / OraChk and TFA
Trace File Analyzer
More information in MOS note 1513912.1
Generic Clusters – OS Summary

Area    | Generic Clusters
Storage | Appendix A
OS      | Memory Config + OraChk / TFA
Generic Clusters – Network
Define “normal”:
• Size the interconnect for aggregated throughput.
• Use redundancy (HAIPs) for load balancing.
• Use different subnets for the interconnect.
• Use Jumbo Frames wherever possible; ensure the entire infrastructure supports them.
(Figure: an 8K data block sent over a 1500-byte MTU is fragmented on Send() and reassembled on Receive().)
More Information in Appendix B
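A quick end-to-end Jumbo Frame check is a non-fragmenting ping with a payload sized just under the MTU. The sketch below only derives and prints the command; the peer name is a placeholder, and `-M do` (don't-fragment) is Linux ping syntax:

```shell
#!/bin/sh
# For a 9000-byte MTU, the largest ICMP payload that fits in one frame is
# 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
MTU=9000
PAYLOAD=$((MTU - 20 - 8))
echo "verify with: ping -c 2 -M do -s ${PAYLOAD} <private-interconnect-peer>"
```

If the infrastructure does not support Jumbo Frames end to end, the ping above fails instead of being silently fragmented.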
• Fact: in virtual environments, certain network components are “virtualized”.
• Consequence: sometimes, network failures are not reflected in the guest environment.
• Reason: OS commands run in the guest fail to detect the network failure, as the “virtual NIC” remains “up”.
• Result: corrective actions may not be performed.
• Solution: Ping Targets
Virtual Generic Clusters? – Use Ping Targets with 12.1.0.2
(Figure: APP / DBI / Guest / Server stack on a virtualized host.)
• Ping Targets are new in Oracle RAC 12.1.0.2.
• Ping Targets use a probe to a given destination (IP) in order to determine network availability.
• Ping Targets are used in addition to local checks.
• Ping Targets are used on the public network only; private networks already use constant heartbeating.
• Ping Targets should be chosen carefully:
– Availability of the ping target is important.
– More than one target can be defined for redundancy.
– Ping target failures should be meaningful.
– Example: pinging a central switch (probably needs to be enabled) between clients and the database servers.
(Virtual) Generic Clusters – Use Ping Targets on Public
[GRID]> su
Password:
[GRID]> srvctl modify network -k 1 -pingtarget "<UsefulTargetIP(s)>"
[GRID]> exit
exit
[GRID]> srvctl config network -k 1
Network 1 exists
Subnet IPv4: 10.1.1.0/255.255.255.0/eth0, static
Subnet IPv6:
Ping Targets: <UsefulTargetIP(s)>
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
Generic Clusters – Network Summary

Area    | Generic Clusters
Storage | Appendix A
OS      | Memory Config + OraChk / TFA
Network | As discussed + Appendix B
Generic Clusters – Cluster

Area    | Generic Clusters
Storage | Appendix A
OS      | Memory Config + OraChk / TFA
Network | As discussed + Appendix B
Cluster | Appendix D

1. Install / maintain HUBs, add Leaf Nodes
2. Adding nodes to the cluster
3. Use Leaf Nodes for non-DB use cases
Extended Oracle RAC
From an Oracle perspective, an Extended RAC installation is in use as soon as data is mirrored (using Oracle ASM) between independent storage arrays. (Exadata Storage Cells are excluded from this definition.)
ER: open to make “Extended Oracle RAC” a distinguishable configuration.
Extended Cluster – Storage

Area    | Generic Clusters             | Extended Cluster
Storage | Appendix A                   | Appendix A
OS      | Memory Config + OraChk / TFA |
Network | As discussed + Appendix B    |
Cluster | Appendix D                   |

Step 1: Create “GRID” Disk Group – Extended Cluster
Step 2: Move Clusterware Files
Step 3: Move ASM SPFILE / password file
Step 4: “srvctl modify asm -count all”
Extended Cluster – OS

Area    | Generic Clusters             | Extended Cluster
Storage | Appendix A                   | Appendix A
OS      | Memory Config + OraChk / TFA | As for Generic Clusters
Network | As discussed + Appendix B    |
Cluster | Appendix D                   |

More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF) – http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf
Extended Cluster – Network
Define “normal”: the goal in an Extended RAC setup is to hide the distance. Any latency increase might (!) impact application performance.
VLANs are fully supported for Oracle RAC; for more information, see: http://www.oracle.com/technetwork/database/database-technologies/clusterware/overview/interconnect-vlan-06072012-1657506.pdf
Vertical subnet separation is not supported.
More information: Oracle Real Application Clusters on Extended Distance Clusters (PDF) – http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf
Extended Cluster – Network Summary

Area    | Generic Clusters             | Extended Cluster
Storage | Appendix A                   | Appendix A
OS      | Memory Config + OraChk / TFA | As for Generic Clusters
Network | As discussed + Appendix B    | As discussed + Appendix B
Extended Cluster – Cluster Summary

Area    | Generic Clusters             | Extended Cluster
Storage | Appendix A                   | Appendix A
OS      | Memory Config + OraChk / TFA | As for Generic Clusters
Network | As discussed + Appendix B    | As discussed + Appendix B
Cluster | Appendix D                   | As Generic

The goal in an Extended RAC setup is to hide the distance.
Dedicated Environments – Only a few items to consider

Area    | Generic Clusters             | Extended Cluster          | Dedicated (OLTP / DWH)
Storage | Appendix A                   | Appendix A                |
OS      | Memory Config + OraChk / TFA | As for Generic Clusters   |
Network | As discussed + Appendix B    | As discussed + Appendix B |
Cluster | Appendix D                   | As Generic                |
Dedicated Environments – Network
[GRID]> srvctl config scan -all
SCAN name: cupscan.cupgnsdom.localdomain, Network: 1
Subnet IPv4: 10.1.1.0/255.255.255.0/eth0, static
Subnet IPv6:
SCAN 0 IPv4 VIP: 10.1.1.55
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN name: cupscan2, Network: 2
Subnet IPv4: 10.2.2.0/255.255.255.0/, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.2.2.55
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
The output above shows the SCAN on Network 1 and the SCAN on Network 2.
More Information:
• Valid Node Checking For Registration (VNCR) (Doc ID 1600630.1)
• How to Enable VNCR on RAC Database to Register only Local Instances (Doc ID 1914282.1)
• Oracle Real Application Clusters – Overview of SCAN – http://www.oracle.com/technetwork/database/options/clustering/overview/scan-129069.pdf
Dedicated Environments – Network Summary

Area    | Generic Clusters             | Extended Cluster          | Dedicated (OLTP / DWH)
Storage | Appendix A                   | Appendix A                |
OS      | Memory Config + OraChk / TFA | As for Generic Clusters   |
Network | As discussed + Appendix B    | As discussed + Appendix B | Appendix B + as discussed
Cluster | Appendix D                   | As Generic                |
Dedicated Environments – Database (DB)
Problem: patching and upgrades. Solution: Rapid Home Provisioning.
Problem: memory consumption. Solution: memory caps.
Problem: number of connections. Solution: various, mostly using connection pools.
(Figure: a connection pool in front of a two-node Oracle RAC cluster, germany and argentina.)
New in Oracle Database 12c:
• SGA and PGA aggregated targets can be limited.
• See documentation for “PGA_AGGREGATE_LIMIT”
Dedicated Environments – Database (DB)
[DB]> sqlplus / as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Thu Sep 18 18:57:30 2014
…
SQL> show parameter pga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit big integer 2G
pga_aggregate_target big integer 211M
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 636M
sga_target big integer 636M
unified_audit_sga_queue_size integer 1048576
1. Do not handle connection storms: prevent them.
2. Limit the number of connections to the database.
3. Use connection pools where possible:
• Oracle Universal Connection Pool (UCP) - http://docs.oracle.com/database/121/JJUCP/rac.htm#JJUCP8197
4. Ensure applications close connections.
• If the number of active connections is significantly lower than the number of open connections, consider using “Database Resident Connection Pooling” - http://docs.oracle.com/database/121/JJDBC/drcp.htm#JJDBC29023
5. If you cannot prevent the storm, slow it down.
• Use listener parameters to mitigate the negative side effects of a connection storm. Most of these parameters can also be used with SCAN.
6. Services can be assigned to one subnet at a time: you control the subnet, you control the service.
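One illustration of the listener parameters mentioned in item 5 is connection rate limiting in listener.ora; the values and host name below are placeholders, not recommendations:

```
# listener.ora sketch: throttle connection storms at the listener
CONNECTION_RATE_LISTENER=10            # max new connections per second

LISTENER=
  (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=tcp)(HOST=node1)(PORT=1521)(RATE_LIMIT=yes)))
```

With RATE_LIMIT enabled on an endpoint, connection attempts beyond the configured rate are queued rather than overwhelming the instance.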
Dedicated Environments – Database Summary

Area    | Generic Clusters             | Extended Cluster          | Dedicated (OLTP / DWH)
Storage | Appendix A                   | Appendix A                |
OS      | Memory Config + OraChk / TFA | As for Generic Clusters   |
Network | As discussed + Appendix B    | As discussed + Appendix B | Appendix B + as discussed
Cluster | Appendix D                   | As Generic                |
DB      |                              |                           | As discussed
Consolidated Environments – No VMs: 2 Main Choices
Database Consolidation
• Multiple database instances running on a server
• Need to manage memory across instances
• Use Instance Caging and QoS (in a RAC cluster)
Use Oracle Multitenant
• A limited number of Container DB instances to manage
• Memory allocation on the server is simplified
• Instance Caging may not be needed (QoS is still beneficial)
(Figure: left, several database instances per node on a two-node cluster; right, one “cons” container database across a four-HUB-node cluster.)
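Instance Caging itself is configured with two instance parameters; a minimal sqlplus sketch follows (the CPU count and plan name are illustrative, not recommendations):

```sql
-- Instance Caging: cap this instance at 4 CPUs; a Resource Manager
-- plan must be active for the cap to be enforced.
ALTER SYSTEM SET resource_manager_plan = 'default_plan' SID='*';
ALTER SYSTEM SET cpu_count = 4 SID='*';
```

Repeating this per instance with CPU counts that sum to (at most) the server's cores partitions CPU among the consolidated databases.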
Consolidated Environments – Make them Dedicated …
Use Oracle Multitenant
• Can be operated as a Dedicated Environment,
– at least from the cluster perspective,
– if only one Container Database instance per server is used.
More information:
• http://www.oracle.com/technetwork/database/focus-areas/database-cloud/database-cons-best-practices-1561461.pdf
• http://www.oracle.com/technetwork/database/options/clustering/overview/rac-cloud-consolidation-1928888.pdf
Consolidated Environments – Network Summary

Area    | Generic Clusters             | Extended Cluster          | Dedicated (OLTP / DWH)    | Consolidated Environments
Storage | Appendix A                   | Appendix A                |                           |
OS      | Memory Config + OraChk / TFA | As for Generic Clusters   |                           |
Network | As discussed + Appendix B    | As discussed + Appendix B | Appendix B + as discussed | As dedicated + as discussed
Cluster | Appendix D                   | As Generic                |                           |
DB      |                              |                           | As discussed              |
Consolidated Environments – Database (DB) Summary

Area    | Generic Clusters             | Extended Cluster          | Dedicated (OLTP / DWH)    | Consolidated Environments
Storage | Appendix A                   | Appendix A                |                           |
OS      | Memory Config + OraChk / TFA | As for Generic Clusters   |                           |
Network | As discussed + Appendix B    | As discussed + Appendix B | Appendix B + as discussed | As dedicated + as discussed
Cluster | Appendix D                   | As Generic                |                           |
DB      |                              |                           | As discussed              | As above

Specifically for Oracle Multitenant on Oracle RAC, see: http://www.slideshare.net/MarkusMichalewicz/oracle-multitenant-meets-oracle-rac-ioug-2014-version
Appendix A
Creating the “GRID” disk group to place the Oracle Clusterware files and the ASM files
Create “GRID” Disk Group – Generic Cluster
Use “quorum” whenever possible.
Create “GRID” Disk Group – Extended Cluster
• More information: http://www.oracle.com/technetwork/database/options/clustering/overview/extendedracversion11-435972.pdf
• Use logical names illustrating the disk destination.
• Use a quorum for ALL (not only GRID) disk groups used in an Extended Cluster.
• Use a Voting Disk NFS destination.
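Putting those bullets together, such a disk group might be created as follows; the disk paths, site names, and the NFS-backed quorum location are placeholders:

```sql
-- Normal-redundancy disk group mirrored across two sites, with a third
-- voting file on an NFS-backed quorum disk (names and paths illustrative).
CREATE DISKGROUP GRID NORMAL REDUNDANCY
  FAILGROUP site_a DISK '/dev/asm_site_a_disk1' NAME grid_site_a_1
  FAILGROUP site_b DISK '/dev/asm_site_b_disk1' NAME grid_site_b_1
  QUORUM FAILGROUP quorum DISK '/voting_nfs/vote_3' NAME grid_quorum
  ATTRIBUTE 'compatible.asm' = '12.1';
```

The QUORUM failgroup holds only a voting file, so losing one full site still leaves a voting-file majority.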
Move Clusterware Files
First replace the Voting Disk location, then add an OCR location (command sequences below).
[GRID]> crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 8bec21793ee84fd3bfc6831746bf60b4 (/dev/sde) [GIMR]
Located 1 voting disk(s).
[GRID]> crsctl replace votedisk +GRID
Successful addition of voting disk 7a205a2588d44f1dbffb10fc91ecd334.
Successful addition of voting disk 8c05b220cfcc4f6fbf5752b6763a18ac.
Successful addition of voting disk 223006a9c28e4fd5bf3b58a465fcb66a.
Successful deletion of voting disk 8bec21793ee84fd3bfc6831746bf60b4.
Successfully replaced voting disk group with +GRID.
CRS-4266: Voting file(s) successfully replaced
[GRID]> crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 7a205a2588d44f1dbffb10fc91ecd334 (/dev/sdd) [GRID]
2. ONLINE 8c05b220cfcc4f6fbf5752b6763a18ac (/dev/sdb) [GRID]
3. ONLINE 223006a9c28e4fd5bf3b58a465fcb66a (/dev/sdc) [GRID]
Located 3 voting disk(s).
[GRID]> whoami
root
[GRID]> ocrconfig -add +GRID
[GRID]> ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 2984
Available space (kbytes) : 406584
ID : 759001629
Device/File Name : +GIMR
Device/File integrity check succeeded
Device/File Name : +GRID
Device/File integrity check succeeded
Device/File not configured
...
Cluster registry integrity check succeeded
Logical corruption check succeeded
Use “ocrconfig -delete +GIMR” if you want to “replace” and maintain a single OCR location.
Move ASM SPFILE – see also MOS note 1638177.1
The default ASM SPFILE location is in the first disk group created (here: GIMR).
Perform a rolling ASM instance restart, facilitated by Flex ASM.
[GRID]> export ORACLE_SID=+ASM1
[GRID]> sqlplus / as sysasm
…
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
Spfile string +GIMR/cup-cluster/ASMPARAMETER
FILE/registry.253.857666347
#Change location
SQL> create pfile='/tmp/ASM.pfile' from spfile;
File created.
SQL> create spfile='+GRID' from pfile='/tmp/ASM.pfile';
File created.
#NOTE:
SQL> show parameter spfile
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
Spfile string +GIMR/cup-cluster/ASMPARAMETER
FILE/registry.253.857666347
Use “gpnptool get” and filter for “ASMPARAMETERFILE” to see updated ASM SPFILE location in GPnP profile prior to restarting.
[GRID]> srvctl status asm
ASM is running on argentina,brazil,germany
[GRID]> srvctl stop asm -n germany -f
[GRID]> srvctl status asm -n germany
ASM is not running on germany
[GRID]> srvctl start asm -n germany
[GRID]> srvctl status asm -n germany
ASM is running on germany
[GRID]> crsctl stat res ora.mgmtdb
NAME=ora.mgmtdb
TYPE=ora.mgmtdb.type
TARGET=ONLINE
STATE=ONLINE on argentina
Perform the restart rolling through the cluster.
12c DB instances remain running!
Move ASM Password File
The default ASM shared password file location is the same as for the SPFILE (here: +GIMR).
Path checking is performed while moving the file (an online operation).
[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: +GIMR/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[GRID]> srvctl modify asm -pwfile +GRID/orapwASM
[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: +GRID/orapwASM
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[GRID]> srvctl modify asm -pwfile GRID
[GRID]> srvctl config ASM
ASM home: <CRS home>
Password file: GRID
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[GRID]> srvctl modify asm -pwfile +GRID
PRKO-3270 : The specified password file +GRID does not conform to an ASM path syntax
Use the correct ASM path syntax!
Appendix B
Creating public and private (DHCP-based) networks including SCAN and SCAN Listeners
Add Public Network – DHCP
Step 1: Add the network. The result is shown after the commands.
[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.0.0
[GRID]> oifcfg setif -global "*"/10.2.2.0:public
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.
[GRID]> su
Password:
[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype dhcp
[GRID]> exit
exit
[GRID]> srvctl config network -k 2
Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
…
ora.net2.network
OFFLINE OFFLINE argentina STABLE
OFFLINE OFFLINE brazil STABLE
OFFLINE OFFLINE germany STABLE
…
Add Public Network – DHCP
Step 2: Add SCAN / SCAN_LISTENER to the new network (as required). The result is shown after the commands.
[GRID]> su
Password:
[GRID]> srvctl update gns -advertise MyScan -address 10.2.2.20
# Need to have a SCAN name. DHCP network requires dynamic VIP resolution via GNS
[GRID]> srvctl modify gns -verify MyScan
The name "MyScan" is advertised through GNS.
[GRID]> srvctl add scan -k 2
PRKO-2082 : Missing mandatory option -scanname
[GRID]> su
Password:
[GRID]> srvctl add scan -k 2 -scanname MyScan
[GRID]> exit
[GRID]> srvctl add scan_listener -k 2
[GRID]> srvctl config scan -k 2
SCAN name: MyScan.cupgnsdom.localdomain, Network: 2
Subnet IPv4: 10.2.2.0/255.255.255.0/, dhcp
Subnet IPv6:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
[GRID]> srvctl config scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN2_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
SCAN Listener LISTENER_SCAN3_NET2 exists. Port: TCP:1521
Registration invited nodes:
Registration invited subnets:
SCAN Listener is enabled.
SCAN Listener is individually enabled on nodes:
SCAN Listener is individually disabled on nodes:
Add Private Network – DHCP
oifcfg commands below; the result (ifconfig -a on a HUB) follows.
[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.0.0
eth3 172.149.0.0
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
Only in OCR: eth1 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.
[GRID]> oifcfg setif -global "*"/172.149.0.0:cluster_interconnect,asm
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
* 172.149.0.0 global cluster_interconnect,asm
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.
BEFORE
eth3 Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE
inet addr:172.149.2.7 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:52 errors:0 dropped:0 overruns:0 frame:0
TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:20974 (20.4 KiB) TX bytes:4230 (4.1 KiB)
AFTER
eth3 Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE
inet addr:172.149.2.7 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1161 errors:0 dropped:0 overruns:0 frame:0
TX packets:864 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:720040 (703.1 KiB) TX bytes:500289 (488.5 KiB)
eth3:1 Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE
inet addr:169.254.245.67 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
HAIPs will only be used for load balancing once at least the DB / ASM instances, if not the node, are restarted. They are considered for failover immediately.
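As a quick cross-check of the output above, the subnet that `oifcfg iflist` reports for eth3 and the link-local nature of the eth3:1 HAIP alias can be verified with Python's standard `ipaddress` module (this sketch is not part of the slides; the addresses are taken from the ifconfig output above):

```python
import ipaddress

# eth3 address and netmask from the ifconfig output above.
iface = ipaddress.ip_interface("172.149.2.7/255.255.240.0")
# `oifcfg iflist` reports the network address of the interface's subnet:
print(iface.network.network_address)  # → 172.149.0.0

# HAIPs are allocated from the IPv4 link-local range 169.254.0.0/16,
# which is why eth3:1 carries a 169.254.x.x address.
haip = ipaddress.ip_address("169.254.245.67")
print(haip.is_link_local)  # → True
```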
ifconfig -a on HUB – excerpt
ifconfig -a on Leaf – excerpt
Side note: Leaf Nodes don’t host HAIPs!
eth2 Link encap:Ethernet HWaddr 08:00:27:AD:DC:FD
inet addr:192.168.7.11 Bcast:192.168.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fead:dcfd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9303 errors:0 dropped:0 overruns:0 frame:0
TX packets:6112 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:8344479 (7.9 MiB) TX bytes:2400797 (2.2 MiB)
eth2:1 Link encap:Ethernet HWaddr 08:00:27:AD:DC:FD
inet addr:169.254.190.250 Bcast:169.254.255.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth3 Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE
inet addr:172.149.2.5 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe1e:2bfe/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:4729 errors:0 dropped:0 overruns:0 frame:0
TX packets:5195 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1555796 (1.4 MiB) TX bytes:2128607 (2.0 MiB)
eth3:1 Link encap:Ethernet HWaddr 08:00:27:1E:2B:FE
inet addr:169.254.6.142 Bcast:169.254.127.255 Mask:255.255.128.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
eth2 Link encap:Ethernet HWaddr 08:00:27:CC:98:C3
inet addr:192.168.7.15 Bcast:192.168.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fecc:98c3/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:7218 errors:0 dropped:0 overruns:0 frame:0
TX packets:11354 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2644101 (2.5 MiB) TX bytes:13979129 (13.3 MiB)
eth3 Link encap:Ethernet HWaddr 08:00:27:06:D5:93
inet addr:172.149.2.6 Bcast:172.149.15.255 Mask:255.255.240.0
inet6 addr: fe80::a00:27ff:fe06:d593/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:6074 errors:0 dropped:0 overruns:0 frame:0
TX packets:5591 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2262521 (2.1 MiB) TX bytes:1680094 (1.6 MiB)
HAIPs on the interconnect are only used by ASM / DB instances. Leaf Nodes do not host those instances and hence do not host HAIPs. CSSD (the node management daemon) uses a different redundancy approach.
Step 1: Add network
Result
Add Public Network – STATIC
[GRID]> oifcfg iflist
eth0 10.1.1.0
eth1 10.2.2.0
eth2 192.168.0.0
eth2 169.254.128.0
eth3 172.149.0.0
eth3 169.254.0.0
#Assuming you have NO global public interface defined on subnet 10.2.2.0
[GRID]> oifcfg setif -global "*"/10.2.2.0:public
[GRID]> oifcfg getif
eth0 10.1.1.0 global public
eth2 192.168.0.0 global cluster_interconnect,asm
* 172.149.0.0 global cluster_interconnect,asm
* 10.2.2.0 global public
PRIF-29: Warning: wildcard in network parameters can cause mismatch among GPnP profile, OCR, and system.
[GRID]> su
Password:
[GRID]> srvctl add network -netnum 2 -subnet 10.2.2.0/255.255.255.0 -nettype STATIC
[GRID]> srvctl config network -k 2
Network 2 exists
Subnet IPv4: 10.2.2.0/255.255.255.0/, static
Subnet IPv6:
Ping Targets:
Network is enabled
Network is individually enabled on nodes:
Network is individually disabled on nodes:
[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
…
ora.net2.network
OFFLINE OFFLINE argentina STABLE
OFFLINE OFFLINE brazil STABLE
OFFLINE OFFLINE germany STABLE
…
Step 2: Add VIPs
Result
Add Public Network – STATIC
[GRID]> srvctl add vip -node germany -address germany-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node argentina -address argentina-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl add vip -node brazil -address brazil-vip2/255.255.255.0 -netnum 2
[GRID]> srvctl config vip -n germany
VIP exists: network number 1, hosting node germany
VIP Name: germany-vip
VIP IPv4 Address: 10.1.1.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
VIP exists: network number 2, hosting node germany
VIP Name: germany-vip2
VIP IPv4 Address: 10.2.2.31
VIP IPv6 Address:
VIP is enabled.
VIP is individually enabled on nodes:
VIP is individually disabled on nodes:
[GRID]> srvctl start vip -n germany -k 2
[GRID]> srvctl start vip -n argentina -k 2
[GRID]> srvctl start vip -n brazil -k 2
[GRID]> srvctl status vip -n germany
VIP germany-vip is enabled
VIP germany-vip is running on node: germany
VIP germany-vip2 is enabled
VIP germany-vip2 is running on node: germany
[GRID]> srvctl status vip -n argentina
VIP argentina-vip is enabled
VIP argentina-vip is running on node: argentina
VIP argentina-vip2 is enabled
VIP argentina-vip2 is running on node: argentina
[GRID]> srvctl status vip -n brazil
VIP brazil-vip is enabled
VIP brazil-vip is running on node: brazil
VIP brazil-vip2 is enabled
VIP brazil-vip2 is running on node: brazil
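The three `srvctl add vip` calls in Step 2 follow one pattern per node. A small, hypothetical Python helper (not an Oracle tool; node names and netmask are taken from the example above) makes that pattern explicit:

```python
# Generate the per-node `srvctl add vip` commands shown in Step 2.
# Assumes the example's naming convention: VIP name = <node>-vip<netnum>.
NODES = ["germany", "argentina", "brazil"]

def add_vip_cmds(nodes, netnum=2, netmask="255.255.255.0"):
    """Return one `srvctl add vip` command per node for network <netnum>."""
    return [
        f"srvctl add vip -node {n} -address {n}-vip{netnum}/{netmask} -netnum {netnum}"
        for n in nodes
    ]

for cmd in add_vip_cmds(NODES):
    print(cmd)
```

Running this reproduces the three commands from the slide; adapting `NODES`, `netnum`, or `netmask` extends the pattern to other clusters.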
Step 3: Add SCAN / SCAN_LISTENER to the new network (as required)
Result
Add Public Network – STATIC
#as root
[GRID]> srvctl add scan -scanname cupscan2 -k 2
[GRID]> exit
[GRID]> srvctl add scan_listener -k 2 -endpoints 1522
[GRID]> srvctl status scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is not running
[GRID]> srvctl start scan_listener -k 2
[GRID]> srvctl status scan_listener -k 2
SCAN Listener LISTENER_SCAN1_NET2 is enabled
SCAN listener LISTENER_SCAN1_NET2 is running on node brazil
[GRID]> srvctl status scan -k 2
SCAN VIP scan1_net2 is enabled
SCAN VIP scan1_net2 is running on node brazil
Appendix C
Automatic Diagnostic Repository (ADR) support for Oracle Grid Infrastructure
• The ADR is a file-based repository for diagnostic data such as traces, dumps, the alert log, health monitor reports, and more.
• ADR helps prevent, detect, diagnose, and resolve problems.
• ADR comes with its own command line tool (adrci) to get easy access to and manage diagnostic information for Oracle GI + DB.
62
Automatic Diagnostic Repository (ADR) Convenience
ADR_base
└── diag
    ├── asm
    ├── rdbms
    ├── tnslsnr
    ├── clients
    ├── crs
    └── (others)
adrci
adrci incident management
Some Management Examples
[GRID]> adrci
ADRCI: Release 12.1.0.2.0 - Production on Thu Sep 18 11:35:31 2014
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
ADR base = "/u01/app/grid"
adrci> show homes
ADR Homes:
diag/rdbms/_mgmtdb/-MGMTDB
diag/tnslsnr/germany/asmnet1lsnr_asm
diag/tnslsnr/germany/listener_scan1
diag/tnslsnr/germany/listener
diag/tnslsnr/germany/mgmtlsnr
diag/asm/+asm/+ASM1
diag/crs/germany/crs
diag/clients/user_grid/host_2998292599_82
diag/clients/user_oracle/host_2998292599_82
diag/clients/user_root/host_2998292599_82
[GRID]> adrci
ADR base = "/u01/app/grid"
…
adrci> show incident;
ADR Home = /u01/app/grid/diag/rdbms/_mgmtdb/-MGMTDB:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------
12073 ORA 700 [kskvmstatact: excessive swapping observed] 2014-09-08 17:44:56.580000 -07:00
36081 ORA 700 [kskvmstatact: excessive swapping observed] 2014-09-14 20:11:17.388000 -07:00
40881 ORA 700 [kskvmstatact: excessive swapping observed] 2014-09-16 15:30:18.319000 -07:00
…
adrci> set home diag/rdbms/_mgmtdb/-MGMTDB
adrci> ips create package incident 12073;
Created package 1 based on incident id 12073, correlation level typical
adrci> ips generate package 1 in /tmp
Generated package 1 in file /tmp/ORA700ksk_20140918110411_COM_1.zip, mode complete
[GRID]> ls -lart /tmp
-rw-r--r--. 1 grid oinstall 811806 Sep 18 11:05 ORA700ksk_20140918110411_COM_1.zip
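The incident IDs used with `ips create package incident <id>` above come straight out of `show incident`. A small sketch (assumption: this parser and its regex are illustrative, not an Oracle tool) shows how such output could be scraped for batch packaging:

```python
import re

# Sample rows copied from the `adrci> show incident` output above.
SHOW_INCIDENT = """\
12073 ORA 700 [kskvmstatact: excessive swapping observed] 2014-09-08 17:44:56.580000 -07:00
36081 ORA 700 [kskvmstatact: excessive swapping observed] 2014-09-14 20:11:17.388000 -07:00
"""

def incident_ids(text):
    # Incident rows start with the numeric INCIDENT_ID column,
    # followed by the problem key (here an ORA 700).
    return [int(m.group(1)) for m in re.finditer(r"^(\d+)\s+ORA", text, re.M)]

print(incident_ids(SHOW_INCIDENT))  # → [12073, 36081]
```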
Binary / Log per Node    Space Requirement
Grid Infra. (GI) Home    ~6.6 GB
RAC DB Home              ~5.5 GB
TFA Repository           10 GB
GI Daemon Traces         ~2.6 GB
ASM Traces               ~9 GB
DB Traces                1.5 GB per DB per month
Listener Traces          60 MB per node per month
Total over 3 months      ~43 GB for 2 RAC DBs
                         ~483 GB for 100 RAC DBs
•Flex ASM vs. Standard ASM, Flex Cluster vs. Standard Cluster
–Does not make a difference for ADR!
Space Requirements, Exceptions, and Rules
gnsd
ocssd
ocssdrim
havip
exportfs
NFS helper
hanfs
ghc
ghs
mgmtdb
agent
APX
gns
Some OC4J Logs
Some GI home Logs
Appendix D
Flex Cluster – add nodes as needed
Initial installation: HUB nodes only
Add Leafs later (addNode)
Recommendation: Install HUB Nodes, Add Leaf Nodes
Add “argentina” as a HUB Node – addNode Part 1
Add “argentina” as a HUB Node – addNode Part 2
Add Leaf Nodes – addNode in Short
Note: Leaf nodes do not require a virtual node name (VIP). Application VIPs for non-DB use cases need to be added manually later.
Normal, can be ignored.
Database installer suggestion
Consider Use Case
Continue to use Leaf Nodes for Applications in 12.1.0.2
Useful if “spain” is likely to become a HUB at some point in time.
DBCA
Despite running Leaf Nodes
Continue to use Leaf Nodes for Applications in 12.1.0.2
[GRID]> olsnodes -s -t
germany Active Unpinned
argentina Active Unpinned
brazil Active Unpinned
italy Active Unpinned
spain Active Unpinned
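The `olsnodes -s -t` output above lists all five nodes as active, which is why DBCA offers them all even though “italy” and “spain” run as Leaf Nodes. A small sketch (assumption: the parser is illustrative, not an Oracle tool) shows how that output could be consumed by a script:

```python
# Sample output copied from `olsnodes -s -t` above:
# <node> <status> <pinned-state> per line.
OLSNODES = """\
germany Active Unpinned
argentina Active Unpinned
brazil Active Unpinned
italy Active Unpinned
spain Active Unpinned
"""

def active_nodes(text):
    """Return node names whose status column is 'Active'."""
    return [line.split()[0] for line in text.splitlines()
            if line.split()[1:2] == ["Active"]]

print(active_nodes(OLSNODES))
# → ['germany', 'argentina', 'brazil', 'italy', 'spain']
```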
Leaf Listener (OFFLINE/OFFLINE)
Trace File Analyzer (TFA)
Some Examples of Resources running on Leaf Nodes
[grid@spain Desktop]$ . grid_profile
[GRID]> crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE argentina STABLE
ONLINE ONLINE brazil STABLE
ONLINE ONLINE germany STABLE
…
ora.LISTENER.lsnr
ONLINE ONLINE argentina STABLE
ONLINE ONLINE brazil STABLE
ONLINE ONLINE germany STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE italy STABLE
OFFLINE OFFLINE spain STABLE
ora.net1.network
ONLINE ONLINE argentina STABLE
ONLINE ONLINE brazil STABLE
ONLINE ONLINE germany STABLE
[GRID]> ps -ef |grep grid_1
root 1431 1 0 14:12 ? 00:00:19 /u01/app/12.1.0/grid_1/jdk/jre/bin/java -Xms128m -Xmx512m -classpath /u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/RATFA.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/je-5.0.84.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/ojdbc6.jar:/u01/app/12.1.0/grid_1/tfa/spain/tfa_home/jlib/commons-io-2.2.jar oracle.rat.tfa.TFAMain /u01/app/12.1.0/grid_1/tfa/spain/tfa_home