Tuesday, July 9, 2013

Approach Document for Analyzing the Batch Processing Performance Issues








1.     The differences among the editions of Oracle Database are summarized below.
We need to check which edition is in use, along with the number of CPU sockets and the amount of RAM.

                Express Edition   Standard Edition One   Standard Edition   Enterprise Edition
Maximum CPU     1 CPU             2 Sockets              4 Sockets          No Limit
RAM             1GB               OS Max                 OS Max             OS Max
Database Size   4GB               No Limit               No Limit           No Limit




2.     Need to check the patch level.
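The edition and patch history can be pulled from the database itself. A minimal sketch (standard dictionary views; on the server, `opatch lsinventory` from $ORACLE_HOME/OPatch gives the one-off patch inventory):

```sql
-- Edition and version string (e.g. "Enterprise Edition Release 10.2.0.4.0")
SELECT banner FROM v$version;

-- Patch/upgrade history recorded in the data dictionary
SELECT action_time, action, version, comments
  FROM dba_registry_history
 ORDER BY action_time;
```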
3.     To understand the behavior of the database, AWR reports with a gap of 20 to 30 minutes between two snapshots are needed during the batch run.
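A minimal sketch of taking the bracketing snapshots manually (note that AWR requires the Diagnostics Pack license):

```sql
-- Take a snapshot just before (or early in) the batch window
EXEC dbms_workload_repository.create_snapshot;

-- ...wait 20-30 minutes while the batch runs...

-- Take the second snapshot, then generate the report between the two;
-- the script prompts for the begin/end snapshot IDs
EXEC dbms_workload_repository.create_snapshot;
-- @?/rdbms/admin/awrrpt.sql
```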
4.     Need to investigate the Oracle alert log for trace files and errors, if any.
5.     Sometimes batch jobs need more I/O than the hardware can provide. Workaround: break a big batch into smaller batches by narrowing the data range.
We need to identify whether that is the case here.
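A sketch of the date-range splitting idea. The table column TXN_DATE and the procedure PROCESS_BATCH are hypothetical placeholders for the real batch logic:

```sql
-- Process one day at a time instead of a whole 30-day range in one pass.
-- PROCESS_BATCH(p_from, p_to) stands in for the actual batch procedure.
BEGIN
  FOR d IN 0 .. 29 LOOP
    process_batch(p_from => DATE '2013-06-01' + d,
                  p_to   => DATE '2013-06-01' + d + 1);
    COMMIT;  -- release undo/temp pressure between chunks
  END LOOP;
END;
/
```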
6. Batch jobs have different needs than OLTP transactions. For example, an OLTP database may need a large db_cache_size value to support thousands of concurrent transactions, while a batch job (doing parallel full-table scans) does not need a large db_cache_size. Conversely, OLTP jobs may not need large RAM areas for sorting and hash joins, while batch jobs may benefit greatly from additional RAM, set via the pga_aggregate_target parameter.
Need to investigate the tables used by the batch job.
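Two quick checks that can guide this, sketched below. The schema name 'BATCHUSER' is a placeholder for the schema that owns the batch tables:

```sql
-- How much PGA is actually allocated and in use during the batch
SELECT name, ROUND(value/1024/1024) AS mb
  FROM v$pgastat
 WHERE name IN ('total PGA allocated', 'total PGA inuse');

-- Sizes of the segments the batch touches, largest first
SELECT segment_name, segment_type, ROUND(bytes/1024/1024) AS mb
  FROM dba_segments
 WHERE owner = 'BATCHUSER'
 ORDER BY bytes DESC;
```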
7. The built-in governors in pga_aggregate_target can limit sort areas to about 100m. Overriding the PGA defaults made a large batch process run more than 8x faster, but it can also cause sporadic ORA-04030 errors:

alter system set pga_aggregate_target=6G;
alter system set "_pga_max_size"=2000000000;

8. Sample instance reconfiguration scripts 
[ Courtesy Oracle Tips by Burleson Consulting ]


The UNIX scripts given below can be used to reconfigure the SGA between the OLTP and DSS without stopping the instance. The example assumes an isolated Oracle server with 8 gigabytes of RAM, with 10 percent of RAM reserved for UNIX overhead, leaving 7.2 gigabytes for Oracle and Oracle connections. The scripts are intended either for HP-UX or Solaris and accept the $ORACLE_SID as an argument.
This script could be run at 6:00 p.m. each evening in order to reconfigure Oracle for the memory-intensive batch tasks.
#!/bin/ksh

# First, we must set the environment . . . .
ORACLE_SID=$1
export ORACLE_SID
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
#ORACLE_HOME=`cat /var/opt/oracle/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

$ORACLE_HOME/bin/sqlplus -s /nolog<<!
connect system/manager as sysdba;
alter system set db_cache_size =1500m;
alter system set shared_pool_size =500m;
alter system set pga_aggregate_target =4000m;
exit
!
The script below will be run at 6:00 a.m. each morning to reconfigure Oracle for the OLTP usage during the day.
#!/bin/ksh

# First, we must set the environment . . . .
ORACLE_SID=$1
export ORACLE_SID
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
#ORACLE_HOME=`cat /var/opt/oracle/oratab|grep ^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

$ORACLE_HOME/bin/sqlplus -s /nolog<<!
connect system/manager as sysdba;

alter system set db_cache_size =4000m;
alter system set shared_pool_size =500m;
alter system set pga_aggregate_target =1500m;

exit
!
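The two scripts can be scheduled from the oracle user's crontab. A sketch, assuming hypothetical script names batch_mode.ksh and oltp_mode.ksh saved under /home/oracle/scripts, with the SID passed as the argument:

```
# min hour dom mon dow  command
0 18 * * * /home/oracle/scripts/batch_mode.ksh PROD
0 6  * * * /home/oracle/scripts/oltp_mode.ksh  PROD
```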
9. Make sure you remove the following init.ora parameters:

_optimizer_cost_based_transformation
_optimizer_features_enable

The default values should be used. In addition, you should disable the automatic stats job by running the command:

exec dbms_scheduler.disable('GATHER_STATS_JOB');

Incorrect configuration has been the #1 cause of bad execution plans on SQL statements for customers who upgrade to 10g R2.
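After making these changes, it is worth confirming that the automatic stats job is actually disabled. A minimal check against the 10g scheduler dictionary view:

```sql
-- ENABLED should show FALSE after the disable call above
SELECT job_name, enabled
  FROM dba_scheduler_jobs
 WHERE job_name = 'GATHER_STATS_JOB';
```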





