Friday, May 15, 2026

plan migration

Oracle SQL Plan Migration Runbook

Objective

This runbook explains how to move a good execution plan from a source environment (for example, a test or pre-production instance) to a Production environment using:

  • SQL Tuning Set (STS)
  • Data Pump (expdp/impdp)
  • SQL Plan Management (SPM)

This approach is commonly used by Oracle Apps DBAs during:

  • SQL Plan Regression
  • Month-End Performance Issues
  • Concurrent Program Slow Performance
  • Optimizer Plan Changes after Statistics Gathering
  • Oracle EBS 12.2 Performance Stabilization

High-Level Flow

Identify Good Plan
        ↓
Create SQL Tuning Set (STS)
        ↓
Load Good SQL into STS
        ↓
Create STS Staging Table
        ↓
Export STS Table using Data Pump
        ↓
Transfer Dump File to Production
        ↓
Import STS Table into Production
        ↓
Unpack STS
        ↓
Load Plan into SQL Plan Baseline
        ↓
Purge Old Cursor from Shared Pool
        ↓
Re-execute SQL / Concurrent Program
        ↓
Validate Improved Execution Plan

Source Environment Steps

Step 1: Identify the Good SQL Plan

Find the SQL_ID and PLAN_HASH_VALUE of the good plan.

SELECT sql_id,
       plan_hash_value
FROM   v$sqlarea
WHERE  sql_id IN ('f9k42ab71mn8q');

OR

SELECT sql_id,
       plan_hash_value
FROM   v$sql
WHERE  sql_id IN ('f9k42ab71mn8q');
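Before migrating the plan, it is worth eyeballing it. A quick sketch using DBMS_XPLAN (passing NULL for the child number displays all cached child cursors of the SQL_ID):

```sql
-- Inspect the cached plan(s) for the SQL_ID before migrating
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('f9k42ab71mn8q', NULL, 'TYPICAL'));
```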

Step 2: Create Empty SQL Tuning Set (STS)

BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(
      sqlset_name => 'f9k42ab71mn8q_STS',
      description => 'STS to move better plan to Production');
END;
/

Step 3: Load SQL Information into STS

DECLARE
  s_sqlarea_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN

  OPEN s_sqlarea_cursor FOR
  SELECT VALUE(p)
  FROM TABLE(
       DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
       'sql_id = ''f9k42ab71mn8q''
        AND plan_hash_value = 1847263512')) p;

  DBMS_SQLTUNE.LOAD_SQLSET(
      sqlset_name     => 'f9k42ab71mn8q_STS',
      populate_cursor => s_sqlarea_cursor);

END;
/

Step 4: Verify SQL Tuning Set Contents

SELECT name,
       statement_count,
       description
FROM   dba_sqlset;

SQL*Plus formatting commands:

COLUMN sql_text FORMAT a30
COLUMN sch FORMAT a3
COLUMN elapsed FORMAT 999999999

Query STS contents:

SELECT sql_id,
       parsing_schema_name AS "SCH",
       sql_text,
       elapsed_time AS "ELAPSED",
       buffer_gets
FROM TABLE(
     DBMS_SQLTUNE.SELECT_SQLSET('f9k42ab71mn8q_STS'));

Detailed verification:

SELECT *
FROM TABLE(
     DBMS_SQLTUNE.SELECT_SQLSET('f9k42ab71mn8q_STS'));

Step 5: Create Staging Table

Note: The staging table name is passed as a string value, so use exactly the same case in the pack, export, import, and unpack steps.

EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'TEST');

Step 6: Pack STS into Staging Table

BEGIN

  DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
      sqlset_name         => 'f9k42ab71mn8q_STS',
      sqlset_owner        => 'SYSTEM',
      staging_table_name  => 'TEST',
      staging_schema_owner=> 'SYSTEM');

END;
/
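
As an optional sanity check, the staging table should now contain rows for the packed STS:

```sql
-- The packed STS rows land in the staging table
SELECT COUNT(*) FROM SYSTEM.TEST;
```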

Step 7: Create Oracle Directory Object

CREATE DIRECTORY TEST AS
'/opt/mis/ebs_backup_lv/backup_1/EBS_SQLSET_BACKUP';

Step 8: Export Staging Table using Data Pump

expdp system DIRECTORY=TEST \
DUMPFILE=f9k42ab71mn8q_STS.dmp \
TABLES=TEST

Step 9: Transfer Dump File to Production

Transfer the dump file using:

  • SCP
  • FTP
  • SFTP

Example:

scp f9k42ab71mn8q_STS.dmp oracle@targetserver:/backup

Target Environment Steps (Production)

Step 10: Import the Dump File

impdp system DIRECTORY=TEST \
DUMPFILE=f9k42ab71mn8q_STS.dmp \
TABLES=TEST

Step 11: Unpack SQL Tuning Set

BEGIN

  DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
      sqlset_name          => '%',
      sqlset_owner         => 'SYSTEM',
      replace              => TRUE,
      staging_table_name   => 'TEST',
      staging_schema_owner => 'SYSTEM');

END;
/
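
Optionally, confirm the STS is now visible in Production before loading the baseline:

```sql
-- The unpacked STS should appear here
SELECT name, owner, statement_count
FROM   dba_sqlset
WHERE  name = 'f9k42ab71mn8q_STS';
```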

Step 12: Load Plan from STS into SQL Plan Baseline

VARIABLE v_plan_cnt NUMBER

BEGIN
  :v_plan_cnt := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
      sqlset_name  => 'f9k42ab71mn8q_STS',
      sqlset_owner => 'SYSTEM',
      basic_filter => 'sql_id = ''f9k42ab71mn8q'' AND plan_hash_value = 1847263512');
END;
/
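
To confirm the load, print the returned plan count (it should be 1 for a single plan) and look for the new baseline. A sketch; the one-hour window is arbitrary:

```sql
PRINT v_plan_cnt

-- A recently created baseline for this statement should now exist
SELECT sql_handle, plan_name, origin, enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  created > SYSDATE - 1/24;
```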

Step 13: Purge Existing SQL from Shared Pool

This forces Oracle to re-parse and pick the new SQL Plan Baseline.

Generate purge command:

SELECT 'exec DBMS_SHARED_POOL.PURGE('''
       || ADDRESS || ',' || HASH_VALUE || ''',''C'');'
FROM   v$sqlarea
WHERE  sql_id IN ('f9k42ab71mn8q');

Execute generated command:

EXEC DBMS_SHARED_POOL.PURGE('0000001006B396C8,2113289046', 'C');

Step 14: Alternative Method - Load from Cursor Cache

DECLARE
  i NATURAL;
BEGIN

  i := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
       'f9k42ab71mn8q',
       1847263512);

END;
/

Validation Queries

Check SQL Baselines

SELECT sql_handle,
       plan_name,
       enabled,
       accepted,
       fixed
FROM   dba_sql_plan_baselines
WHERE  signature IN (
       SELECT exact_matching_signature
       FROM   v$sql
       WHERE  sql_id='f9k42ab71mn8q');

Verify Active Plan

SELECT sql_id,
       child_number,
       plan_hash_value,
       executions,
       elapsed_time
FROM   v$sql
WHERE  sql_id='f9k42ab71mn8q';

Check if Baseline is Used

SELECT sql_id,
       sql_plan_baseline,
       plan_hash_value
FROM   v$sql
WHERE  sql_id='f9k42ab71mn8q';

Oracle Apps DBA Production Checklist

  • Good SQL_ID identified
  • Correct PLAN_HASH_VALUE captured
  • STS created successfully
  • SQL loaded into STS
  • Staging table created
  • STS packed successfully
  • Data Pump export completed
  • Dump transferred securely
  • Import completed in Production
  • STS unpacked successfully
  • SQL Plan Baseline loaded
  • Old cursor purged
  • Concurrent request rerun
  • Improved performance validated

Real-Time Oracle EBS Scenario

Problem

A month-end concurrent request that normally completes in 2 minutes suddenly started taking 4 hours after statistics gathering.

Root Cause

The optimizer selected a new, inefficient execution plan with:

  • Full Table Scan
  • High Logical Reads
  • Excessive Nested Loop Operations
  • Large TEMP Usage

Solution

The DBA identified a good historical plan in the source environment and migrated it to Production using:

  • SQL Tuning Set (STS)
  • SQL Plan Baseline (SPM)

Result

Before              After
Runtime: 4 Hours    Runtime: 2 Minutes
TEMP Spike          Stable TEMP
CPU High            CPU Normal
Business Delay      Business Success

Important Notes

Best Practices

  • Always validate the plan in lower environments first.
  • Never purge shared pool aggressively in peak production hours.
  • Take business approval before rerunning concurrent requests.
  • Verify plan stability after stats gathering.
  • Monitor AWR and ASH after implementation.
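
For the last point, one way to track the statement from AWR after the change (a sketch; requires a Diagnostics Pack license and AWR retention covering the period):

```sql
-- Average elapsed seconds per execution, per AWR snapshot, for the migrated SQL
SELECT sn.end_interval_time,
       st.plan_hash_value,
       st.executions_delta,
       ROUND(st.elapsed_time_delta / NULLIF(st.executions_delta, 0) / 1e6, 2)
           AS avg_elapsed_sec
FROM   dba_hist_sqlstat st
       JOIN dba_hist_snapshot sn
         ON  sn.snap_id         = st.snap_id
         AND sn.dbid            = st.dbid
         AND sn.instance_number = st.instance_number
WHERE  st.sql_id = 'f9k42ab71mn8q'
ORDER  BY sn.end_interval_time;
```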

Important DBA Views

View                          Purpose
V$SQL                         Active SQL details
V$SQLAREA                     Aggregated SQL statistics
DBA_SQLSET                    SQL Tuning Sets
DBA_SQL_PLAN_BASELINES        SQL Plan Baselines
V$SQL_MONITOR                 Real-time SQL monitoring
DBA_HIST_SQLSTAT              Historical SQL statistics
DBA_HIST_ACTIVE_SESS_HISTORY  ASH performance analysis

Conclusion

Using SQL Plan Management (SPM) and SQL Tuning Sets (STS) is one of the safest methods to stabilize SQL performance in Oracle EBS 12.2 Production environments.

This approach helps Oracle Apps DBAs:

  • Avoid risky code changes
  • Restore performance quickly
  • Reduce month-end failures
  • Stabilize execution plans
  • Improve business confidence

It is a critical real-world DBA skill for handling production SQL regressions.

Wednesday, May 13, 2026

Pre Prod Analysis

-- 1. Confirm concurrent request details

SELECT request_id,
phase_code,
status_code,
actual_start_date,
actual_completion_date,
argument_text,
logfile_name,
outfile_name
FROM apps.fnd_concurrent_requests
WHERE request_id = 598460495;

-- 2. Get program/application details
SELECT r.request_id,
p.concurrent_program_name,
pt.user_concurrent_program_name,
a.application_short_name,
r.phase_code,
r.status_code
FROM apps.fnd_concurrent_requests r,
apps.fnd_concurrent_programs p,
apps.fnd_concurrent_programs_tl pt,
apps.fnd_application a
WHERE r.concurrent_program_id = p.concurrent_program_id
AND p.concurrent_program_id = pt.concurrent_program_id
AND p.application_id = a.application_id
AND p.application_id = pt.application_id
AND pt.language = 'US'
AND r.request_id = 598460495;

-- 3. Validate current database session
SELECT s.inst_id,
s.sid,
s.serial#,
s.username,
s.status,
s.sql_id,
s.prev_sql_id,
s.event,
s.wait_class,
s.seconds_in_wait,
s.blocking_session,
s.module,
s.action,
s.client_identifier,
s.machine,
s.program
FROM gv$session s
WHERE s.sid = 2581
AND s.serial# = 24100;

-- 4. Check if it is really sleeping/retry loop
SELECT inst_id,
sid,
serial#,
event,
wait_class,
state,
seconds_in_wait,
sql_id,
prev_sql_id,
plsql_entry_object_id,
plsql_object_id,
plsql_subprogram_id
FROM gv$session
WHERE sid = 2581
AND serial# = 24100;

-- 5. Identify PL/SQL object currently involved
SELECT s.sid,
s.sql_id,
s.event,
o.owner,
o.object_name,
o.object_type
FROM gv$session s
LEFT JOIN dba_objects o
ON s.plsql_object_id = o.object_id
WHERE s.sid = 2581
AND s.serial# = 24100;

-- 6. Get SQL text for SQL_ID
SELECT sql_id,
sql_text
FROM gv$sql
WHERE sql_id = 'dxasruz3nkuvf';

-- 7. Check ASH history for this request/session
SELECT sample_time,
session_id,
session_serial#,
sql_id,
event,
wait_class,
module,
action,
blocking_session,
current_obj#
FROM gv$active_session_history
WHERE session_id = 2581
AND session_serial# = 24100
ORDER BY sample_time DESC;

-- 8. If ASH aged out, check AWR ASH
SELECT sample_time,
instance_number,
session_id,
session_serial#,
sql_id,
event,
wait_class,
module,
action,
blocking_session
FROM dba_hist_active_sess_history
WHERE session_id = 2581
AND session_serial# = 24100
AND sample_time >= SYSDATE - 1
ORDER BY sample_time DESC;

-- 9. Check if request is waiting for another request
SELECT request_id,
parent_request_id,
priority_request_id,
phase_code,
status_code,
hold_flag,
requested_start_date,
actual_start_date
FROM apps.fnd_concurrent_requests
WHERE request_id = 598460495
OR parent_request_id = 598460495;

-- 10. Check incompatibility/blocking at concurrent manager level
SELECT r.request_id,
r.phase_code,
r.status_code,
r.hold_flag,
r.requested_start_date,
r.actual_start_date,
r.controlling_manager,
r.concurrent_program_id
FROM apps.fnd_concurrent_requests r
WHERE r.phase_code = 'R'
AND r.status_code = 'R'
ORDER BY r.actual_start_date;

Initial RCA direction:

The first request is not showing a normal I/O wait. It is waiting on PL/SQL lock timer, which commonly indicates the program is intentionally sleeping or polling inside package logic. Since it has already run for around 128 minutes, the DBA should verify whether the TPA Monitor program is stuck in a retry loop, waiting for dependent activity, waiting for another concurrent request, or controlled by application logic.

Do not kill immediately. First collect:

-- Evidence snapshot
SELECT SYSDATE evidence_time,
inst_id,
sid,
serial#,
sql_id,
prev_sql_id,
event,
wait_class,
seconds_in_wait,
module,
action,
client_identifier
FROM gv$session
WHERE sid = 2581
AND serial# = 24100;

Then check the request log file from logfile_name. If the log is not moving and ASH repeatedly shows PL/SQL lock timer, then update business/application owner:
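
To quantify "repeatedly", the ASH samples for the session can be aggregated by wait event (a sketch over the last hour, using the same SID/serial as above):

```sql
-- Distribution of wait events for the session over the last hour
SELECT event,
       COUNT(*) AS samples,
       ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 1) AS pct
FROM   gv$active_session_history
WHERE  session_id = 2581
AND    session_serial# = 24100
AND    sample_time >= SYSDATE - 1/24
GROUP  BY event
ORDER  BY samples DESC;
```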

Suggested update:

The request is currently active for ~128 minutes. The database session is not waiting on an I/O or CPU bottleneck; it is mainly waiting on PL/SQL lock timer, which indicates the program is sleeping/polling inside application PL/SQL logic. The DBA is validating the request log, the PL/SQL package, dependencies on other requests, and ASH history before taking any action. Recommended not to kill the request until application/business confirmation is received.

Deep Dive: Oracle EBS 12.2 OACore JVM OutOfMemoryError Troubleshooting Runbook


Oracle E-Business Suite 12.2 OACore JVM memory issues are common in production environments. One of the most critical errors Apps DBAs may see is:

java.lang.OutOfMemoryError: GC overhead limit exceeded

This error means the OACore JVM is spending too much time in Garbage Collection but is unable to release enough memory. As a result, users may experience application slowness, page hangs, login issues, or OAF screen failures.


1. Common Error Seen in OACore Logs

In the OACore server output/log file, you may see errors like:

<Error> <Socket> <BEA-000405> <Uncaught Throwable in processSockets>
java.lang.OutOfMemoryError: GC overhead limit exceeded

Other related errors may include:

java.lang.OutOfMemoryError: Java heap space
BEA-000337 ExecuteThread has been busy
BEA-000339 Thread has become unstuck
Unable to reserve connection
ThreadPool has stuck threads

2. What This Error Means

GC overhead limit exceeded means the JVM is almost out of heap memory.

The JVM is continuously trying to clean memory, but it is not able to recover enough memory.

In simple terms:

  • OACore JVM memory is exhausted.
  • Garbage Collection runs repeatedly.
  • Application threads become slow or stuck.
  • Users experience application slowness or hanging.

3. Typical User Symptoms

Users may report:

  • Application is slow
  • OAF pages are hanging
  • Login page is slow
  • Forms launch delay
  • Blank page after clicking a responsibility
  • Submit button not responding
  • Intermittent page errors

4. First Step: Identify the Affected OACore

Log in to the application tier and run:

ps -ef | grep -i oacore | grep -v grep

Example output:

oacore_server4  PID=4269

Note the affected OACore managed server and its PID.


5. Check OACore Logs

Go to the WebLogic server logs directory:

cd $EBS_DOMAIN_HOME/servers

Search for recent OACore errors:

find . -iname "*oacore*.out" -mtime -1
find . -iname "*oacore*.log" -mtime -1

Search inside logs:

grep -iE "OutOfMemory|GC overhead|Java heap space|BEA-000405|stuck thread|Unable to reserve connection" */logs/*

If you see:

java.lang.OutOfMemoryError: GC overhead limit exceeded

then the OACore JVM is under memory pressure.


6. Capture Evidence Before Restart

Before restarting OACore, capture diagnostics.

Create a directory:

dir=/tmp/oacore_oom_$(date +%F_%H%M)
mkdir -p "$dir" && cd "$dir"

Capture thread dumps:

jstack -l <PID> > jstack_1.txt
sleep 30
jstack -l <PID> > jstack_2.txt
sleep 30
jstack -l <PID> > jstack_3.txt

Capture heap summary:

jmap -heap <PID> > heap_summary.txt

Capture live heap histogram:

jmap -histo:live <PID> > heap_histo_live.txt

Capture GC utilization:

jstat -gcutil <PID> 5s 12 > gcutil.txt

7. How to Read GC Output

Run:

jstat -gcutil <PID> 5s 12

Focus on these columns:

Column  Meaning               Warning Sign
O       Old Generation usage  90% to 100% continuously
FGC     Full GC count         Increasing frequently
FGCT    Full GC time          Increasing continuously
YGC     Young GC count        Frequent is acceptable
GCT     Total GC time         Very high means GC pressure

Bad sign:

O = 98% or 99%
FGC increasing every few seconds
FGCT increasing continuously

This confirms GC thrashing.
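
This check can be scripted. A minimal sketch that screens jstat -gcutil output for samples where the Old generation (the O column, field 4) is nearly full; the embedded sample data is illustrative only, not from a real system:

```shell
# Count jstat -gcutil samples with Old gen >= 95% (skip the header row).
# In practice, pipe in real output: jstat -gcutil <PID> 5s 12 | awk ...
awk 'NR > 1 && $4 >= 95 { bad++ } END { print bad+0 " high-pressure samples" }' <<'EOF'
  S0     S1     E       O      M     YGC    YGCT   FGC    FGCT     GCT
  0.00   0.00   99.90   98.71  59.52   421  9.011    87  310.204  319.215
  0.00   0.00  100.00   99.12  59.52   421  9.011    89  318.976  327.987
EOF
```

Repeated non-zero counts across collection intervals are another way to document GC thrashing in the RCA evidence.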


8. Check Current JVM Heap Size

Run:

ps -ef | grep -i oacore | grep Xmx

Look for values like:

-Xms2048m -Xmx2048m

If the heap is too small for the workload, OACore may hit memory exhaustion.


9. Validate from Database Side

From the database, check JDBC sessions created by the OACore PID:

SELECT s.inst_id,
s.sid,
s.serial#,
s.status,
s.username,
s.program,
s.process client_pid,
s.sql_id,
s.prev_sql_id,
s.event,
s.wait_class,
s.last_call_et
FROM gv$session s
WHERE s.program LIKE '%JDBC%'
AND s.process = '<OACORE_PID>'
ORDER BY s.last_call_et DESC;

If you see:

SQL_ID     = NULL
EVENT      = SQL*Net message from client
WAIT_CLASS = Idle

then the database is waiting for OACore to send the next request.

This confirms the database is not the bottleneck at that moment.


10. Find Abnormal JDBC Session Count

Use this query:

SELECT s.process client_pid,
COUNT(*) total_sessions,
SUM(CASE WHEN s.status='ACTIVE' THEN 1 ELSE 0 END) active_sessions,
SUM(CASE WHEN s.status='INACTIVE' THEN 1 ELSE 0 END) inactive_sessions
FROM gv$session s
WHERE s.program LIKE '%JDBC%'
GROUP BY s.process
ORDER BY total_sessions DESC;

If one OACore PID has an unusually high number of sessions, check for:

  • Connection leak
  • Stuck requests
  • JDBC pool saturation
  • Unhealthy load balancing

11. Check WebLogic Stuck Threads

Search OACore logs:

grep -iE "stuck|hogging|ExecuteThread|ThreadPool|BEA-000337|BEA-000339" $EBS_DOMAIN_HOME/servers/oacore_server*/logs/*

Important messages:

BEA-000337 ExecuteThread has been busy
BEA-000339 Thread has become unstuck

Many stuck threads indicate OACore is not processing requests normally.


12. Check JDBC Pool Saturation

From WebLogic Console, check the OACore data source.

Review:

  • Active Connections Current Count
  • Active Connections High Count
  • Waiting For Connection Current Count
  • Leaked Connection Count
  • Failures To Reconnect Count

Danger signs:

  • Waiting For Connection > 0
  • Leaked Connection Count > 0
  • Active Connections near Max Capacity

13. Analyze Heap Histogram

Run:

jmap -histo:live <PID> | head -50

Look for top objects:

Object Pattern              Possible Meaning
byte[]                      Large payloads, XML, files, attachments
char[] / String             Large text, session data, cached data
oracle.apps.fnd.framework   OAF object growth
oracle.jbo                  BC4J/Application Module objects
weblogic.servlet            HTTP session objects
java.util.HashMap           Cache/session growth
XML classes                 Large XML or BI Publisher payload

This helps identify whether memory is consumed by XML, OAF pages, sessions, attachments, or custom code.


14. Immediate Mitigation

If OACore is unhealthy, restart only the affected managed server after collecting diagnostics.

admanagedsrvctl.sh stop oacore_server4
admanagedsrvctl.sh start oacore_server4

If a graceful stop does not work, obtain approval and then use an OS-level kill carefully (kill -3 first captures a thread dump):

kill -3 <PID>
kill <PID>

Avoid:

kill -9

unless the process is completely hung and approved.


15. Post-Restart Validation

Check OACore process:

ps -ef | grep -i oacore | grep -v grep

Check logs:

tail -100f $EBS_DOMAIN_HOME/servers/oacore_server4/logs/oacore_server4.out

Check GC behavior:

jstat -gcutil <NEW_PID> 5s 10

Healthy signs:

  • Old Generation is not stuck at 99%
  • Full GC count is not continuously increasing
  • Users can access OAF pages normally
  • No new OutOfMemoryError appears

16. Permanent Fix Approach

16.1 Increase OACore Heap

Example:

-Xms4096m
-Xmx4096m

Do not blindly increase heap. First validate server RAM and total number of JVMs.


16.2 Add More OACore Managed Servers

If user load is high, distribute traffic across more OACore JVMs.


16.3 Review Custom OAF Pages

Common causes:

  • VO query fetching too many rows
  • Large LOVs
  • Attachment rendering
  • Personalization issues
  • Session objects not released
  • Custom code memory leaks

16.4 Enable Heap Dump on OOM

Add JVM options:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/u01/heapdumps

Make sure the filesystem has enough space.


16.5 Review WebLogic JDBC Pool

Check:

  • Max Capacity
  • Initial Capacity
  • Connection Reserve Timeout
  • Inactive Connection Timeout
  • Leaked Connection Count

17. Final RCA Statement

Oracle EBS 12.2 OACore managed server became unhealthy due to JVM heap memory exhaustion.

OACore logs showed repeated java.lang.OutOfMemoryError: GC overhead limit exceeded along with BEA-000405 Uncaught Throwable in processSockets.

Database validation showed JDBC sessions mostly inactive and waiting on SQL*Net message from client, classified under Idle wait class. This confirmed that the database had completed prior SQL execution and was waiting for the application tier.

The root cause was isolated to OACore JVM memory pressure, where excessive garbage collection prevented the JVM from recovering sufficient heap memory.

Immediate mitigation was to capture thread dumps, heap summary, GC statistics, and restart the affected OACore managed server.

Permanent corrective actions include reviewing JVM heap sizing, GC behavior, stuck threads, JDBC pool utilization, custom OAF memory usage, and enabling heap dump generation for future OutOfMemoryError analysis.

18. Conclusion

When OACore is unhealthy and the database shows idle JDBC sessions, do not immediately blame SQL or the database.

A correct Apps DBA investigation should correlate:

  • OACore logs
  • JVM heap usage
  • GC behavior
  • Thread dumps
  • JDBC sessions
  • Database wait events
  • WebLogic health

In this case, the real issue is not SQL tuning.

The real issue is:

OACore JVM memory exhaustion causing GC overhead limit exceeded.