Sunday, February 13, 2011

RAC: 9i-10g-11g


Oracle RAC: 9i vs 10g and 11gR2
Here are the configuration differences between 9i, 10g and 11gR2 RAC.
The detailed sections below explain the points summarized in this table:


1.
   9i:    Oracle Real Application Clusters (RAC), introduced with Oracle9i in 2001, supersedes the Oracle Parallel Server (OPS) database option. Oracle9i Real Application Clusters Guard Release 9.0.1.3 is derived from Oracle Parallel Fail Safe 8.1.7.2.
   10g:   Introduces CRS (Cluster Ready Services).
   11gR2: Introduces the SCAN IP.

2.
   9i:    Third-party (vendor) clusterware required on most platforms.
   10g:   When using Oracle 10g or higher, Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates (except for TruCluster, in which case you need vendor clusterware). You can still use clusterware from other vendors if the clusterware is certified for Oracle RAC.
   11gR2: Same as 10g.

3.
   9i:    The setup utility (PFSSETUP) is used to configure Oracle9i Real Application Clusters Guard.
   10g:   CRS.
   11gR2: CRS.

4.
   9i:    GSD.
   10g:   1. Cluster Synchronization Services (CSS)
          2. Cluster Ready Services (CRS)
          3. Event Management (EVM)
          4. Oracle Notification Service (ONS)
          5. RACG
          6. Process Monitor Daemon (OPROCD)
   11gR2: Same as 10g.

5.
   9i:    In Oracle9i, the OCR did not write HARD-compatible blocks. If the device used by the OCR is enabled for HARD, then use the method described in the HARD white paper to disable HARD for the OCR before downgrading your OCR. If you do not disable HARD, then the downgrade operation fails.
   10g:   The OCR writes HARD-compatible blocks.
   11gR2: Same as 10g.

6.
   9i:    Cluster Manager.
   10g:   In Oracle Database 10g release (10.1), Cluster Ready Services (CRS) serves as the clusterware software, and Cluster Synchronization Services (CSS) is the cluster manager software for all platforms. The Oracle Cluster Synchronization Service Daemon (OCSSD) performs some of the clusterware functions on UNIX-based systems. On Windows-based systems, OracleCSService, OracleCRService, and OracleEVMService replace the Oracle Database OracleCMService9i.

7.
   9i:    Relocatable IP.
   10g:   VIP.
   11gR2: SCAN IP: SCAN IPs are different from VIPs or static IPs; they are public addresses whose primary purpose is to serve as a single entry point to access the cluster.

Oracle Real Application Clusters (RAC), introduced with Oracle9i in 2001, supersedes the Oracle Parallel Server (OPS) database option. Oracle9i Real Application Clusters Guard Release 9.0.1.3 is derived from Oracle Parallel Fail Safe 8.1.7.2.
Whereas Oracle9i required external clusterware (vendor clusterware such as Veritas or Sun Cluster) for most of the Unix flavors (except Linux and Windows, where Oracle provided its own free clusterware), as of Oracle 10g Oracle's clusterware product, Cluster Ready Services (CRS), became available for all operating systems.
With the release of Oracle Database 10g Release 2 (10.2), Cluster Ready Services was renamed Oracle Clusterware.
When using Oracle 10g or higher, Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates (except for TruCluster, in which case you need the vendor clusterware). You can still use clusterware from other vendors if the clusterware is certified for Oracle RAC.

Each hardware vendor implements cluster database processing by using Operating
System-Dependent (OSD) Layers. These layers provide communication links
between the Operating System and the Real Application Clusters software.

The setup utility (PFSSETUP) is used to configure Oracle9i Real Application Clusters Guard: this utility assists with the generation of the appropriate Oracle Real Application Clusters Guard files for the specified environment and simplifies configuration.
In 9i RAC we had the concept of a primary node, which owns the relocatable IP.
Command line in 9i RAC: PFSCTL commands, e.g.
PFSCTL> PFSBOOT  -- this starts the cluster guard

Change to the $ORACLE_HOME/pfs/setup directory and run the PFSSETUP
utility:
$ cd $ORACLE_HOME/pfs/setup
$ ./pfssetup
The PFSSETUP utility version information is displayed:
PFS_SETUP for Solaris: Version 9.2.0.1.0 on Mon Mar 19 10:35:53 PST 2002
(c) Copyright 2002 Oracle Corporation. All rights reserved.
To create all of the Oracle9i Real Application Clusters Guard setup files, enter
option 6 at the prompt.
The following menu appears:
Choose an operation on the selected files:
1] Generate only
2] Deploy only
3] Generate and deploy
4] Deinstall
5] List the affected files
6] Return to Main Menu
PFS_SETUP>
To generate and deploy the setup files, enter option 3 at the prompt.
The list of affected files appears:
The list of affected files is
PFS_SALES.RUN
PFS_SALES.HALT
PFS_SALES.MONSTART
PFS_SALES.MONSTOP
PFS_SALES_User.def
PFS_SALES_System.def
listener.ora.ded.pfs
tnsnames.ora.ded.pfs
tnsnames.ora.ded.clnt.pfs

Sample Listener Configuration for Primary and Secondary Nodes
SALES_hosta_LSNR=
(DESCRIPTION= (ADDRESS=(PROTOCOL=TCP)(HOST=192.10.1.21)(PORT=2024)(QUEUESIZE=1024)))
STARTUP_WAIT_TIME_SALES_hosta_LSNR=0
CONNECT_TIMEOUT_SALES_hosta_LSNR=10
#TRACE_LEVEL_SALES_hosta_LSNR=SUPPORT
SALES_hostb_LSNR=
(DESCRIPTION= (ADDRESS=(PROTOCOL=TCP)(HOST=192.10.1.22) (PORT=2024)(QUEUESIZE=1024)))
STARTUP_WAIT_TIME_SALES_hostb_LSNR=0
CONNECT_TIMEOUT_SALES_hostb_LSNR=10
#TRACE_LEVEL_SALES_hostb_LSNR=SUPPORT

The Cluster Manager and Node Monitor, oracm, accepts registration of Oracle instances to the cluster and sends ping messages to the Cluster Managers (Node Monitors) on other RAC nodes. If this heartbeat fails, oracm uses a quorum file or a quorum partition on the shared disk to distinguish between a node failure and a network failure: if a node stops sending ping messages but continues writing to the quorum file or partition, the other Cluster Managers can recognize it as a network failure. The Cluster Manager (CM) now uses UDP instead of TCP for communication.
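For reference, the quorum-file and heartbeat behavior described above is driven by oracm's cmcfg.ora file. A minimal sketch for a two-node Linux cluster follows; the node names and the CmDiskFile path are hypothetical, and the exact parameter set varies by platform and 9i patch level, so verify against the cluster manager documentation for your release:

```
# $ORACLE_HOME/oracm/admin/cmcfg.ora  (sketch, not a drop-in file)
ClusterName=Oracle Cluster Manager, version 9i
MissCount=210                          # missed heartbeats before eviction
PrivateNodeNames=int-node1 int-node2   # interconnect (private) names
PublicNodeNames=node1 node2
ServicePort=9998
CmDiskFile=/u04/quorum/quorum.dbf      # quorum file on shared storage
HostName=int-node1                     # this node's private name
```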



After Oracle Real Application Clusters Guard fails over the primary instance role and the user restores the secondary instance role, the system is resilient and the Packs are on their home nodes, but the instance roles are reversed. If you want the primary instance to run on the preferred primary node, then you must use the MOVE_PRIMARY and RESTORE commands.

The function of GSD (10g and above) is to service requests from 9i RAC management clients, so when there are no 9i databases present there is nothing for GSD to do. Consequently, there is no impact on a RAC cluster if GSD is offline and 9i is not used.
If GSD fails to start for whatever reason, the best thing is to work with Oracle Support to analyze and fix the issue. Until then, GSD can be temporarily disabled.
In 11.2 GSD is disabled by default and the service will show as target:offline, status:offline.
Disable GSD (pre 11.2)
After confirming that there are no 9i databases being used you can disable GSD by adding 'exit 0' after the initial comments in the script $ORACLE_HOME/bin/gsdctl where $ORACLE_HOME is the home from which nodeapps are running (i.e. crs home).
#case $ORACLE_HOME in 
# "") echo "****ORACLE_HOME environment variable not set!" 
# echo " ORACLE_HOME should be set to the main" 
# echo " directory that contains Oracle products." 
# echo " Set and export ORACLE_HOME, then re-run." 
# exit 1;; 
#esac 
exit 0 ## Manually added as a temporary workaround 
A backup of the original script should be made before making the above change.

Disable GSD (11.2)
You may want to disable GSD after you upgraded all your Oracle9i RAC databases.
srvctl stop nodeapps
srvctl disable nodeapps -g
srvctl start nodeapps

Enable GSD in 11.2
srvctl enable nodeapps -g
srvctl start nodeapps
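The pre-11.2 edit described above can be scripted. Here is a minimal sketch, demonstrated on a mock copy of gsdctl rather than the real one (on a real cluster you would point GSDCTL at $ORACLE_HOME/bin/gsdctl in the CRS home); the awk insertion point, just before the first non-comment line, is an assumption about where the script's initial comment block ends:

```shell
# Demo on a mock gsdctl; on a real node, set GSDCTL="$ORACLE_HOME/bin/gsdctl".
GSDCTL=/tmp/gsdctl.demo
cat > "$GSDCTL" <<'EOF'
#!/bin/sh
# gsdctl: starts and stops the Global Services Daemon
case "$1" in
  start) echo "starting GSD" ;;
esac
EOF

cp "$GSDCTL" "$GSDCTL.orig"    # always back up the original script first

# Insert 'exit 0' just before the first non-comment, non-blank line:
awk '!done && !/^#/ && !/^$/ { print "exit 0 ## Manually added as a temporary workaround"; done=1 } 1' \
    "$GSDCTL.orig" > "$GSDCTL"

sh "$GSDCTL" start             # now a no-op: the script exits before doing anything
```

Restoring the backup over the edited copy re-enables the script once the underlying issue is fixed.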
  • 10g uses CRS (Cluster Ready Services), which consolidates GSD, oracm and many other daemons from 9i into the CRS set of daemons.
     
  • 10g uses the concept of a VIP (virtual IP address) for failover and other aspects of the network environment.


10g RAC:


Oracle Clusterware is a portable cluster management solution that is integrated with
the Oracle database. The Oracle Clusterware is also a required component for using
RAC. In addition, Oracle Clusterware enables both single-instance Oracle databases
and RAC databases to use the Oracle high availability infrastructure. The Oracle
Clusterware enables you to create a clustered pool of storage to be used by any
combination of single-instance and RAC databases.
Oracle Clusterware is the only clusterware that you need for most platforms on which
RAC operates. You can also use clusterware from other vendors if the clusterware is
certified for RAC.

Oracle recommends that you configure a redundant interconnect to prevent the
interconnect from being a single point of failure. Oracle also recommends that you use
User Datagram Protocol (UDP) on a Gigabit Ethernet for your cluster interconnect.
Crossover cables are not supported for use with Oracle Clusterware or RAC databases.

Some of the major Oracle Clusterware components

  1. Cluster Synchronization Services (CSS)
  2. Cluster Ready Services (CRS)
  3. Event Management (EVM)
  4. Oracle Notification Service (ONS)
  5. RACG
  6. Process Monitor Daemon (OPROCD)

Oracle Clusterware Processes on UNIX-Based Systems
crsd—Performs high availability recovery and management operations such as
maintaining the OCR and managing application resources. This process runs as
the root user, or by a user in the admin group on Mac OS X-based systems. This
process restarts automatically upon failure.
evmd—Event manager daemon. This process also starts the racgevt process to
manage FAN server callouts.
ocssd—Manages cluster node membership and runs as the oracle user; failure
of this process results in cluster restart.
oprocd—Process monitor for the cluster. Note that this process only appears on
platforms that do not use vendor clusterware with Oracle Clusterware.
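A quick way to see which of these daemons are present on a UNIX node is to scan the process list. The `.bin` suffixes below are an assumption about how the daemon executables typically appear; adjust the pattern for your platform:

```shell
# List any Oracle Clusterware daemons on this node; the bracketed first letter
# keeps grep from matching its own process. Prints a note if none are running.
ps -ef | grep -E '[c]rsd\.bin|[e]vmd\.bin|[o]cssd\.bin|[o]procd' \
    || echo "no Oracle Clusterware daemons running on this node"
```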

Other high availability components include node resources such as the Virtual Internet
Protocol (VIP) address, the Global Services Daemon, the Oracle Notification Service,
and the Oracle Net Listeners. Oracle Clusterware starts these resources automatically
when it starts the node, and restarts them automatically if they stop. The
application-level resources are the instances and the Oracle Clusterware background
processes that run on each instance.
You can use the VIPCA to administer VIP addresses, and SRVCTL to administer other
node resources. The information that describes the configuration of these components
is stored in the Oracle Cluster Registry (OCR), which you can administer.

(*) In Oracle9i, the OCR did not write HARD-compatible blocks. If the device used by
OCR is enabled for HARD, then use the method described in the HARD white paper
to disable HARD for the OCR before downgrading your OCR. If you do not disable
HARD, then the downgrade operation fails.

(**)In earlier releases of the Oracle Database, cluster manager implementations on
some platforms were referred to as "Cluster Manager". In Oracle Database 10g
release (10.1), Cluster Ready Services (CRS) serves as the clusterware software,
and Cluster Synchronization Services (CSS) is the cluster manager software for all
platforms. The Oracle Cluster Synchronization Service Daemon (OCSSD) performs
some of the clusterware functions on UNIX-based systems. On Windows-based
systems, OracleCSService, OracleCRService, and OracleEVMService replace the
Oracle Database OracleCMService9i.




Managing a 9i RAC Database on 10g Clusterware
The key to managing a 9i database after the 10g CRS has been installed is to use the 9i version of srvctl, sqlplus, rman, etc. to start, stop and maintain the 9i instances.  
Use the racenv script to ensure the correct version of a given application is used to manage a given database, and use the Linux which command to double-check that the correct version is accessed for a given database. Switching from a 9i database to a 10g database is just a matter of using racenv to change the environment variables. It is really that simple!
The 10g version of gsd will provide service to the 9i srvctl just fine. The 9i version of the gsdctl script was intentionally disabled in Chapter 8 because it should not be used to manage the 10g gsd service.
It is extremely easy to register a 9i database with a 10g listener. Simply start it up with the 9i version of srvctl, and use the 10g version of lsnrctl status LISTENER_ to check the service.
When starting the 9i database, the error message PRKP-1040 Failed to get the status of the listeners associated with instance should be ignored! The database will start normally and will be served by the 10g listener even though this message may appear.
The tnsnames.ora file in the 9i oracle home will serve the 9i clients that need to connect to 9i instances. Edit the 9i tnsnames.ora file so that the host names read vip-oracle1 and vip-oracle2. There is no need to edit the tnsnames.ora file in the 10g oracle home.
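The racenv script itself is not shown in this post, but its job, pointing ORACLE_HOME and PATH at the right release, can be sketched as a small shell function. The function name and home paths below are hypothetical; substitute your actual 9i and 10g homes:

```shell
# Minimal sketch of a racenv-style version switcher (paths are assumptions).
set_rac_env () {
  case "$1" in
    9i)  ORACLE_HOME=/u01/app/oracle/product/9.2.0 ;;
    10g) ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 ;;
    *)   echo "usage: set_rac_env 9i|10g" >&2; return 1 ;;
  esac
  PATH="$ORACLE_HOME/bin:$PATH"    # so srvctl, sqlplus, rman resolve per version
  export ORACLE_HOME PATH
}

set_rac_env 9i
echo "$ORACLE_HOME"                # the 9i home is now first in PATH
```

After switching, `which srvctl` should resolve to the selected home's bin directory, which is exactly the double-check the text recommends.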

11gR2 RAC

According to reliable sources on the web, SCAN provides a single domain name (via DNS), allowing end-users to address a RAC cluster as if it were a single IP address. SCAN works by replacing a hostname or IP list with virtual IP addresses (VIPs). SCAN is part of the 11g Release 2 movement toward "RAC virtualization". Virtualization is great for some RAC shops, not so good for others.
SCAN is an automatic load-balancing tool that uses a relatively primitive least-recently-loaded algorithm. Most Fortune 50 mission-critical RAC systems will not use an automated load balancer, preferring intelligent RAC load balancing in which you direct like-minded transactions to like-minded nodes. This approach greatly reduces the load on the Cache Fusion layer because fewer blocks must be sent across the RAC interconnect.
According to Oracle, the benefits of SCAN are:
  • Fast RAC failover: If a node fails, Oracle detects the loss of connection to the VIP and redirects new connections to the surviving VIPs. This is an alternative to Transparent Application Failover (TAF) for automatic load balancing.
  • Easier maintenance for Grid RAC systems: For Grid systems that gen-in and gen-out blade servers frequently, SCAN offers easier change control for the RAC DBA. As RAC nodes are added or deleted, the DBA does not have to change the configuration files to reflect the current list of RAC node IP addresses (or hostnames). In a nutshell, SCAN allows a single cluster alias for all instances in the cluster.
  • SCAN IPs are different from VIPs or static IPs; they are public addresses whose primary purpose is to serve as a single entry point to access the cluster.
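For illustration, here is a client tnsnames.ora alias that connects through the SCAN rather than through a list of per-node VIPs. The SCAN name cluster-scan.example.com and the sales service are hypothetical; substitute your own cluster's SCAN and service names:

```
SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sales)
    )
  )
```

Because the SCAN name resolves via DNS to the cluster rather than to any one node, this entry does not need to change when RAC nodes are added or removed.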

