Oracle E-Business Suite 12.2 OACore JVM memory issues are common in production environments. One of the most critical errors Apps DBAs may see is:
java.lang.OutOfMemoryError: GC overhead limit exceeded
This error means the OACore JVM is spending too much time in garbage collection while reclaiming too little memory. As a result, users may experience application slowness, page hangs, login issues, or OAF screen failures.
1. Common Error Seen in OACore Logs
In the OACore server output/log file, you may see errors like:
<Error> <Socket> <BEA-000405> <Uncaught Throwable in processSockets>
java.lang.OutOfMemoryError: GC overhead limit exceeded
Other related errors may include:
java.lang.OutOfMemoryError: Java heap space
BEA-000337 ExecuteThread has been busy
BEA-000339 Thread has become unstuck
Unable to reserve connection
ThreadPool has stuck threads
2. What This Error Means
GC overhead limit exceeded means the JVM is effectively out of heap memory. By default, HotSpot raises this error when it is spending more than about 98% of total time in garbage collection while recovering less than 2% of the heap.
The JVM keeps trying to clean memory, but it cannot reclaim enough to make progress.
In simple terms:
OACore JVM memory is exhausted.
Garbage Collection is running repeatedly.
Application threads become slow or stuck.
Users experience application slowness or hanging.
3. Typical User Symptoms
Users may report:
Application is slow
OAF pages are hanging
Login page is slow
Forms launch delay
Blank page after clicking responsibility
Submit button not responding
Intermittent page errors
4. First Step: Identify the Affected OACore
Log in to the application tier and run:
ps -ef | grep -i oacore | grep -v grep
Example output:
oacore_server4 PID=4269
Note the affected OACore managed server and its PID.
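Since most of the diagnostic commands below take the PID as input, it helps to capture it once in a variable (a minimal sketch; pgrep -f matches the managed server name on the Java command line, and oacore_server4 is a placeholder for the affected server):
OACORE_PID=$(pgrep -f oacore_server4 | head -1)
echo "OACore PID: $OACORE_PID"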
5. Check OACore Logs
Go to the WebLogic managed server directory under the EBS domain:
cd $EBS_DOMAIN_HOME/servers
Search for recent OACore errors:
find . -iname "*oacore*.out" -mtime -1
find . -iname "*oacore*.log" -mtime -1
Search inside logs:
grep -iE "OutOfMemory|GC overhead|Java heap space|BEA-000405|stuck thread|Unable to reserve connection" */logs/*
If you see:
java.lang.OutOfMemoryError: GC overhead limit exceeded
then the OACore JVM is under memory pressure.
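To see when the errors started, a small loop over the recent logs prints the last few matches per file (a sketch, run from $EBS_DOMAIN_HOME/servers as above; adjust the glob to your managed server names):
for f in oacore_server*/logs/*.out; do
  echo "== $f =="
  grep -n "OutOfMemoryError" "$f" | tail -3
done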
6. Capture Evidence Before Restart
Before restarting OACore, capture diagnostics.
Create a working directory and move into it (capturing the timestamp in a variable once, so the mkdir and cd cannot disagree if the minute changes between them):
DIAG_DIR=/tmp/oacore_oom_$(date +%F_%H%M)
mkdir -p "$DIAG_DIR"
cd "$DIAG_DIR"
Capture thread dumps:
jstack -l <PID> > jstack_1.txt
sleep 30
jstack -l <PID> > jstack_2.txt
sleep 30
jstack -l <PID> > jstack_3.txt
Capture heap summary:
jmap -heap <PID> > heap_summary.txt
Capture live heap histogram:
jmap -histo:live <PID> > heap_histo_live.txt
Capture GC utilization:
jstat -gcutil <PID> 5s 12 > gcutil.txt
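The capture steps above can be wrapped into a single script so that nothing is missed under pressure (a minimal sketch; assumes the jstack/jmap/jstat binaries come from the same JDK that runs OACore and that the PID is passed as the first argument):
#!/bin/bash
# Usage: ./collect_oacore_oom.sh <OACORE_PID>
PID=$1
DIR=/tmp/oacore_oom_$(date +%F_%H%M)
mkdir -p "$DIR" && cd "$DIR" || exit 1

# Three thread dumps, 30 seconds apart, to spot threads that never move
for i in 1 2 3; do
  jstack -l "$PID" > "jstack_$i.txt"
  [ "$i" -lt 3 ] && sleep 30
done

jmap -heap "$PID" > heap_summary.txt            # heap configuration and usage
jmap -histo:live "$PID" > heap_histo_live.txt   # live objects (note: triggers a full GC)
jstat -gcutil "$PID" 5s 12 > gcutil.txt         # one minute of GC utilization samples

echo "Diagnostics saved in $DIR"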
7. How to Read GC Output
Run:
jstat -gcutil <PID> 5s 12
Focus on these columns:
| Column | Meaning | Warning Sign |
|---|---|---|
| O | Old Generation usage | 90% to 100% continuously |
| FGC | Full GC count | Increasing frequently |
| FGCT | Full GC time | Increasing continuously |
| YGC | Young GC count | Frequent is acceptable |
| GCT | Total GC time | Very high means GC pressure |
Bad signs:
O = 98% or 99%
FGC increasing every few seconds
FGCT increasing continuously
This confirms GC thrashing.
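A quick way to confirm the pattern without eyeballing raw numbers is to flag samples where the old generation is nearly full (a sketch; assumes the JDK 7/8 -gcutil column layout, where O is the fourth column):
jstat -gcutil $OACORE_PID 5s 12 | awk 'NR > 1 && $4 > 95 {print "WARN: Old gen at " $4 "% in sample " (NR-1)}'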
8. Check Current JVM Heap Size
Run:
ps -ef | grep -i oacore | grep Xmx
Look for values like:
-Xms2048m -Xmx2048m
If the heap is too small for the workload, OACore may hit memory exhaustion.
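To pull just the heap flags out of the long Java command line (a sketch; assumes the sizes are set with standard -Xms/-Xmx flags as in the example above):
ps -ef | grep -i oacore_server | grep -v grep | tr ' ' '\n' | grep -E '^-Xm[sx]'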
9. Validate from Database Side
From the database, check JDBC sessions created by the OACore PID:
SELECT s.inst_id,
s.sid,
s.serial#,
s.status,
s.username,
s.program,
s.process client_pid,
s.sql_id,
s.prev_sql_id,
s.event,
s.wait_class,
s.last_call_et
FROM gv$session s
WHERE s.program LIKE '%JDBC%'
AND s.process = '<OACORE_PID>'
ORDER BY s.last_call_et DESC;
If you see:
SQL_ID = NULL
EVENT = SQL*Net message from client
WAIT_CLASS = Idle
then the database is waiting for OACore to send the next request.
This confirms the database is not the bottleneck at that moment.
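The same check can be scripted from the application tier (a minimal sketch; assumes sqlplus is on PATH, valid apps credentials, and that $OACORE_PID holds the PID captured in step 4 — the $ in gv$session must be escaped inside the here-document):
sqlplus -s apps/<apps_password> <<EOF
SELECT s.status, COUNT(*) sessions
FROM   gv\$session s
WHERE  s.program LIKE '%JDBC%'
AND    s.process = '$OACORE_PID'
GROUP  BY s.status;
EOF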
10. Find Abnormal JDBC Session Count
Use this query:
SELECT s.process client_pid,
COUNT(*) total_sessions,
SUM(CASE WHEN s.status='ACTIVE' THEN 1 ELSE 0 END) active_sessions,
SUM(CASE WHEN s.status='INACTIVE' THEN 1 ELSE 0 END) inactive_sessions
FROM gv$session s
WHERE s.program LIKE '%JDBC%'
GROUP BY s.process
ORDER BY total_sessions DESC;
If one OACore PID has an unusually high number of sessions, check for:
Connection leak
Stuck requests
JDBC pool saturation
Unhealthy load balancing
11. Check WebLogic Stuck Threads
Search OACore logs:
grep -iE "stuck|hogging|ExecuteThread|ThreadPool|BEA-000337|BEA-000339" $EBS_DOMAIN_HOME/servers/oacore_server*/logs/*
Important messages:
BEA-000337 ExecuteThread has been busy
BEA-000339 Thread has become unstuck
Many stuck threads indicate OACore is not processing requests normally.
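To gauge how widespread the problem is, count the stuck-thread messages per log file (a sketch; exact log file names vary by environment):
grep -c "BEA-000337" $EBS_DOMAIN_HOME/servers/oacore_server*/logs/*.log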
12. Check JDBC Pool Saturation
From WebLogic Console, check the OACore data source.
Review:
Active Connections Current Count
Active Connections High Count
Waiting For Connection Current Count
Leaked Connection Count
Failures To Reconnect Count
Danger signs:
Waiting For Connection > 0
Leaked Connection Count > 0
Active Connections near Max Capacity
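The configured pool limits can also be read directly from the domain's data source descriptors when console access is slow (a sketch; config/jdbc is the standard WebLogic location for data source modules, but file names differ per environment):
grep -E "max-capacity|initial-capacity" $EBS_DOMAIN_HOME/config/jdbc/*.xml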
13. Analyze Heap Histogram
Run:
jmap -histo:live <PID> | head -50
Look for top objects:
| Object Pattern | Possible Meaning |
|---|---|
| byte[] | Large payloads, XML, files, attachments |
| char[] / String | Large text, session data, cached data |
| oracle.apps.fnd.framework | OAF object growth |
| oracle.jbo | BC4J/Application Module objects |
| weblogic.servlet | HTTP session objects |
| java.util.HashMap | Cache/session growth |
| XML classes | Large XML or BI Publisher payload |
This helps identify whether memory is consumed by XML, OAF pages, sessions, attachments, or custom code.
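To focus on EBS application classes rather than core Java types, filter the histogram with the package patterns from the table above (a sketch using the $OACORE_PID variable from step 4):
jmap -histo:live $OACORE_PID | grep -E "oracle\.apps|oracle\.jbo" | head -20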
14. Immediate Mitigation
If OACore is unhealthy, restart only the affected managed server after collecting diagnostics.
admanagedsrvctl.sh stop oacore_server4
admanagedsrvctl.sh start oacore_server4
If a graceful stop does not work, obtain approval and then use an OS-level kill carefully. Sending kill -3 first makes the JVM write a final thread dump to the server's .out file before you terminate it:
kill -3 <PID>
kill <PID>
Avoid kill -9 unless the process is completely hung and the forced kill has been approved.
15. Post-Restart Validation
Check OACore process:
ps -ef | grep -i oacore | grep -v grep
Check logs:
tail -100f $EBS_DOMAIN_HOME/servers/oacore_server4/logs/oacore_server4.out
Check GC behavior:
jstat -gcutil <NEW_PID> 5s 10
Healthy signs:
Old Generation is not stuck at 99%
Full GC is not continuously increasing
Users can access OAF pages normally
No new OutOfMemoryError appears
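A simple HTTP probe confirms the login page is being served again (a sketch; replace host and port with your web entry point — /OA_HTML/AppsLogin is the standard EBS login URL):
curl -sk -o /dev/null -w "HTTP %{http_code}\n" "https://<host>:<port>/OA_HTML/AppsLogin"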
16. Permanent Fix Approach
16.1 Increase OACore Heap
Example:
-Xms4096m
-Xmx4096m
Do not increase the heap blindly; first validate available server RAM against the total number of JVMs running on the host.
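Before raising -Xmx, compare the heap already configured across all JVMs on the host with physical memory. The sketch below is a rough check and assumes every -Xmx value is expressed in megabytes with an m suffix; also note that in EBS 12.2 the OACore JVM arguments are typically maintained through the s_oacore_jvm_start_options context variable so that changes survive AutoConfig.
# Sum configured -Xmx values (MB) across all running Java processes
ps -eo args | grep -oE '\-Xmx[0-9]+m' | grep -oE '[0-9]+' | awk '{sum+=$1} END {print "Total configured heap: " sum " MB"}'
# Compare against physical memory
free -m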
16.2 Add More OACore Managed Servers
If user load is high, distribute traffic across more OACore JVMs.
16.3 Review Custom OAF Pages
Common causes:
VO query fetching too many rows
Large LOV
Attachment rendering
Personalization issue
Session objects not released
Custom code memory leak
16.4 Enable Heap Dump on OOM
Add JVM options:
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/u01/heapdumps
Make sure the filesystem has enough space.
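A heap dump file is roughly the size of the configured heap, so verify the target directory exists, is writable, and has enough free space (a small sketch; /u01/heapdumps matches the path configured above):
mkdir -p /u01/heapdumps
df -h /u01/heapdumps
touch /u01/heapdumps/.write_test && rm /u01/heapdumps/.write_test && echo "Directory is writable"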
16.5 Review WebLogic JDBC Pool
Check:
Max Capacity
Initial Capacity
Connection Reserve Timeout
Inactive Connection Timeout
Leaked Connection Count
17. Final RCA Statement
Oracle EBS 12.2 OACore managed server became unhealthy due to JVM heap memory exhaustion.
OACore logs showed repeated java.lang.OutOfMemoryError: GC overhead limit exceeded along with BEA-000405 Uncaught Throwable in processSockets.
Database validation showed JDBC sessions mostly inactive and waiting on SQL*Net message from client, classified under Idle wait class. This confirmed that the database had completed prior SQL execution and was waiting for the application tier.
The root cause was isolated to OACore JVM memory pressure, where excessive garbage collection prevented the JVM from recovering sufficient heap memory.
Immediate mitigation was to capture thread dumps, heap summary, GC statistics, and restart the affected OACore managed server.
Permanent corrective actions include reviewing JVM heap sizing, GC behavior, stuck threads, JDBC pool utilization, custom OAF memory usage, and enabling heap dump generation for future OutOfMemoryError analysis.
18. Conclusion
When OACore is unhealthy and the database shows idle JDBC sessions, do not immediately blame SQL or the database.
A correct Apps DBA investigation should correlate:
OACore logs
JVM heap usage
GC behavior
Thread dumps
JDBC sessions
Database wait events
WebLogic health
In this case, the real issue is not SQL tuning.
The real issue is:
OACore JVM memory exhaustion causing GC overhead limit exceeded.