Monday, October 31, 2011

How to Turn Archiving ON and OFF in Oracle RDBMS [ID 69739.1]



Modified 06-OCT-2011 Type BULLETIN Status PUBLISHED

In this Document
Purpose
Scope and Application
How to Turn Archiving ON and OFF in Oracle RDBMS
Turning Archiving On and Off
Setting the Initial Database Archiving Mode
Changing the Database Archiving Mode
To Prepare to Switch Database Archiving Mode
References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.2 - Release: 10.2 to 11.2
Information in this document applies to any platform.
***Checked for relevance on 01-Mar-2011***

Purpose

This article gives an overview of database archiving.

If you want or need more information about this topic, you can also access the Backup & Recovery Community of Customers and Oracle Specialists directly via:
https://communities.oracle.com/portal/server.pt/community/database_backup_and_recovery/243

Scope and Application

The intended audience is database administrators.

How to Turn Archiving ON and OFF in Oracle RDBMS

Turning Archiving On and Off

You set a database's initial archiving mode as part of database creation. Usually, you can use the default of NOARCHIVELOG mode at database creation
because there is no need to archive the redo information generated at that time. After creating the database, decide whether to change from the initial archiving mode.

After a database has been created, you can switch the database's archiving mode on demand. However, you should generally not switch the database between archiving modes frequently.

NOTE: If a database is automatically created during Oracle installation, the initial archiving mode of the database is operating system specific. See your operating system-specific Oracle documentation.

ARCHIVELOG mode is necessary for creating on-line backups and for certain types of database recovery. Configuring the database to operate in
ARCHIVELOG mode allows the user to perform complete and point-in-time recovery from media (disk) failures using off-line or on-line backups. If
ARCHIVELOG mode is disabled, the database can be restored from a backup in case of failure, but it cannot be rolled forward from the time of that backup to the point at which the failure occurred.

Oracle recommends ARCHIVELOG mode for all production databases.

Setting the Initial Database Archiving Mode

When you create the database, you set the initial archiving mode of the redo log in the CREATE DATABASE statement. If you do not specify either ARCHIVELOG or NOARCHIVELOG, NOARCHIVELOG is the default. To verify the database's mode, execute the following statement:


SQL> SELECT NAME, CREATED, LOG_MODE, CHECKPOINT_CHANGE#, ARCHIVE_CHANGE#
     FROM V$DATABASE;

NAME CREATED LOG_MODE CHECKPOINT_CHANGE# ARCHIVE_CHANGE#
--------- --------- ------------ ------------------ ---------------
O112 19-MAR-09 NOARCHIVELOG 1438426 135961

Changing the Database Archiving Mode

There are "init.ora" parameters you need to set so that your database handles ARCHIVELOG mode properly. They are:

LOG_ARCHIVE_DEST
LOG_ARCHIVE_FORMAT

LOG_ARCHIVE_DEST:
This parameter specifies the directory where your archive logs will be placed.

LOG_ARCHIVE_FORMAT:
This parameter defines the naming format for the archived logs. For example, with the format arch%s.arc, the archived logs will be named arch1.arc, arch2.arc, arch3.arc, and so on, where %s is replaced by the log sequence number.
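Putting the two parameters together, a minimal "init.ora" fragment might look like the following. The destination path here is purely illustrative; use a directory appropriate to your system.

```
# Example init.ora fragment (path and format are illustrative only)
LOG_ARCHIVE_DEST   = /u01/app/oracle/arch
LOG_ARCHIVE_FORMAT = arch%s.arc
```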

To Prepare to Switch Database Archiving Mode

1. Shut down the database instance.

SQL> shutdown immediate

An open database must be closed and dismounted and any associated instances shut down before the database's archiving mode can be switched. Archiving cannot be disabled if any datafiles need media recovery.

2. Back up the database.

This backup can be used with the archive logs that you will generate.

3. Perform any operating system specific steps (optional).

4. Start up a new instance and mount, but do not open the database.

SQL> startup mount

NOTE: If you are using the Oracle Parallel Server, you must mount the database exclusively using one instance to switch the database's archiving mode.

5. Put the database into ARCHIVELOG mode.

SQL> alter database archivelog;

NOTE: You can also use

SQL> alter database noarchivelog;

to take the database out of ARCHIVELOG mode.

6. Open the database.

SQL> alter database open;

7. Verify your database is now in archivelog mode.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 22
Next log sequence to archive 24
Current log sequence 24

8. Archive all your redo logs at this point.

SQL> archive log all;

9. Ensure these newly created Archive log files are added to the backup process.

See the Administration guide & Backup and Recovery guide for more information about switching the archiving mode when using the Oracle Parallel Server.
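The steps above can be consolidated into a single SQL*Plus sequence. This is a sketch only: it assumes you are connected as SYSDBA and that the backup in step 2 has already been taken.

```sql
-- Run as SYSDBA, after taking a backup (step 2 above).
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Confirm the new mode:
ARCHIVE LOG LIST
```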

How can I tell what is installed and licensed after finishing an EBS install?


The script $AD_TOP/sql/adutconf.sql, run from the APPS schema, will generate a detailed file (adutconf.lst) with information about the database configuration and which products are installed and licensed after the Rapid Install process completes.

Labbaik..….Allaahumma Labbaik



As I pack my belongings..... and get ready for this great Journey
I cry with gratitude..…once more You've blessed me
Tears flowing…..in ecstasy my heart flies
My soul sings….my whole being cries
Labbaik....Allaahumma Labbaik

But what are these words..... what do they really mean?
Do I appreciate them?…Do I just repeat what I've seen?
Do I really understand….
Do I even comprehend….
Labbaik....Allaahumma Labbaik

Labbaik means I am here, Ya Rabbee.....I respond to Your call
With a heart full of hope….for that Mercy that envelops all
I'm here to serve You……I'm here to comply
To You I submit……until the day I die
Labbaik..….Allaahumma Labbaik

Labbaik means total devotion…..submission to Allaah Almighty
Labbaik means sincere obedience and commitment completely
Labbaik means surrender
…..for now and for ever
Labbaik….Allaahumma Labbaik

Labbaik means I'm sorry…..so sorry for all my past
Sorry for my mistakes and my shortcomings of last
I beg You, ya Allaah…..please forgive me
I repent to you, O Maulaa……do accept me
Labbaik....Allaahumma Labbaik

Labbaik means to change..…so that I live my life for You
Whatever I say and do, from now on will only be to please You
I will give up my evil ways
I will fix my nights and days
Labbaik....Allaahumma Labbaik

But Labbaik also means.......I love you, Ya Rabbee
Nothing else matters....but that You're pleased with me
It's Your Pleasure that I seek
Though I'm so sinful and weak
Labbaik….Allaahumma Labbaik

Head bowed low…. remorseful, and so full of fear
I am ashamed of my life….repentant, as I stand here
Yearning your forgiveness
I cry out in wretchedness…..
Labbaik....Allaahumma Labbaik

Will I change for the better….will I turn my life around?
Or will I succumb to temptation, wherever it is found?
When will I even begin to see
The faults that are so obvious in me?
Labbaik....Allaahumma Labbaik

I'll stop dealing in Ribaa….. I'll give up my Haraam earning
I'll wear the Hijaab…… no more lying , no backbiting
Never a single prayer will I neglect
My character, I'll make…upright and erect
Labbaik....Allaahumma Labbaik

Don't let me become heedless, I pray….Don't make me forget
All the promises I've made, Ya Allaah….all the goals that I've set
Keep me strong, keep me steadfast
Let this feeling in me, forever last
Labbaik....Allaahumma Labbaik

For I don't know when I'll die and I don't know where it will be
But I hope I am in Ihraam….when the Angel comes to me
'Cos if I die so, the Prophet told you and me
when raised from the dead, my first words'll be..…
Labbaik….Allaahumma Labbaik

The Seven Deadly Habits of a DBA…and how to cure them


Calling widespread bad habits in database administration "deadly" may seem extreme. However, when you consider the critical nature of most data, and just how damaging data loss or corruption can be to a corporation, "deadly" seems pretty dead-on.
Although these habits are distressingly common among DBAs, they are curable with some shrewd management intervention. What follows is a list of the seven habits we consider the deadliest, along with some ideas on how to eliminate them.
Habit #1. THE LEAP OF FAITH: "We have faith in our backup."
Blind faith can be endearing, but not when it comes to backing up a database. Backups should be trusted only as far as they have been tested and verified.
Cures:
  • Have your DBAs verify that the backup is succeeding regularly, preferably using a script that notifies them if there's an issue.
  • Maintain a backup to your backup. DBAs should always use at least two backup methods. A common technique is to use those old-fashioned exports as a backup to the online backups.
  • Schedule test recoveries as often as is practical. An early sign that your DBA team is either overworked or not prioritizing correctly is having a quarter go by without a test recovery. Test recoveries confirm that your backup strategy is on track, while allowing your team to practice recovery activities so they can handle them effectively when the time comes.
Habit #2. GREAT EXPECTATIONS: "It will work the way we expect it to. Let's go ahead."
Although not user friendly in the traditional sense, Oracle is very power-user friendly— once you've been working with it for a while, you develop an instinct for the way things "should" work. Although that instinct is often right, one of the most dangerous habits any DBA can possess is an assumption that Oracle will "just work" the way it should.
Cures:
  • Inculcate a "practice, practice, practice" mentality throughout the organization. DBAs need to rehearse activities in the safe sandbox of a test environment that's designed to closely mimic the behaviour of the production system. The organization needs to allow the time and money for them to do so.
  • Pair inexperienced DBAs with senior ones whenever possible—or take them under your own wing. New DBAs tend to be fearless, but learning from someone else's experience can help instill some much needed paranoia.
  • Review the plans for everything. It's amazing how often DBAs say, "I've done that a hundred times, I don't need a plan." If they're heading into execution mode, they absolutely need a plan.
Habit #3. LAISSEZ-FAIRE ADMINISTRATION: "We don't need to monitor the system. The users always let us know when something's wrong."
If you depend on the users to inform the DBA team that there's a problem, it may already be too late.
Cures:
  • Install availability and performance monitoring systems so that issues are identified and resolved before they cause service-affecting failures.
  • Avoid post-release software issues by working with developers and testers to ensure that all production-ready software is stable and high-performance.
Habit #4. THE MEMORY TEST: "We'll remember how this happened, and what we did to get things going again."
It may seem impossible that a DBA team would forget a massive procedure that took them weeks to get right, and yet it happens all the time. In order to prevent recurring mistakes and take advantage of gained experience, documentation is essential.
Cures:
  • Require that your DBAs maintain a comprehensive documentation library and activity diary, including a significant level of rationale, syntax, and workflow detail.
  • Provide your team with groupware on your intranet so that these documents become searchable in an emergency.
  • Enforce the discipline of documentation and check it periodically. Ask your DBAs: When was this tablespace created, by whom, and with what SQL? What tasks were performed on a particular day? If they can't answer quickly, you'll know they've gone back to relying on memory.
Habit #5. THE BLAME GAME: "Don't look at me, it's the developer's fault that SQL is in production"
Some DBAs have a real "us versus them" mentality when it comes to developers in their organization. They see themselves not as facilitators helping the developers develop quality code from a database standpoint, but rather as guardians who prevent poor-quality code from making it into production. This might seem like semantics, but a confrontational relationship between developers and DBAs results in a lack of developer initiative and significant slowdowns in release cycles.
Cures:
  • Select DBAs who understand it's their responsibility to work as an integrated team with the developers they support.
  • Cultivate a team attitude by structuring continuous DBA involvement in every project rather than at review milestones.
  • Consider assigning an individual DBA in a developer support role. If it's clearly in the job description, there's more motivation to do it well.
Habit #6. THE SOLO ACT: "I know what I'm doing and don't need any help."
Database administration is increasingly complex and even the most senior DBAs can't possibly know every last detail. DBAs have different specialties, which need to be culled and utilized. When DBAs feel like they know, or should know, everything, they don't ask questions and miss out on valuable knowledge they could be gaining from others.
Cures:
  • Foster a teamwork culture where it's acceptable for DBAs to admit they don't know the answer and to ask for help.
  • Encourage your DBAs to seek out an outside peer group as a forum for brainstorming and testing their assumptions. No single person can match the expertise and experience of even a relatively small group.
  • Provide a safety net of tech resources such as reference materials, courses, and outside experts or consultants on call.
Habit #7. TECHNO-LUST: "Things would work so much better if only we had..."
DBAs are often on top of the latest technology, which can help them do a superlative job. But when the desire for new technology causes DBAs to recommend unnecessary hardware purchases or software add-ons, costs tend to skyrocket quickly—as do problems.
Cures:
  • Never upgrade your hardware infrastructure without first exhausting all tuning opportunities. Remember, ten years ago enormous enterprises were run on servers one-tenth the capacity—all thanks to necessity and skill.
  • Never consent to using advanced or new features until you're well aware of the ongoing maintenance commitment and resulting costs.
  • Watch out for DBA support software that presents friendly GUI interfaces for difficult tasks. This type of interface allows a beginner DBA to act as an intermediate DBA under certain circumstances, but simultaneously prevents that beginner from learning the actual skills behind the tasks. Moreover, these tools tend to hide real risks from the DBA, making potentially damaging activities as easy as point-and-click.
Whether it takes a twelve-step program or one tiny adjustment, all of these deadly DBA habits can be kicked. Of course, the first step is recognizing the problem. By starting with this list and doing a careful inventory of the successes and failures in your team's database administration, you'll be well on your way to finding a cure.

Oracle Database Health check scripts

The health of the database can be checked in various ways, including:

Monitoring Scope                                     Current Status
OS level:
 1  Physical memory / load                           Load normal; load averages: 0.35, 0.37, 0.36
 2  OS space threshold (archive, ora_dump, etc.)     Sufficient space available
 3  Top 10 processes consuming memory                No process using exceptionally high memory
 4  Free volumes available                           Sufficient disk space available on the mount points
 5  Filesystem space                                 Under normal threshold
Database level:
 6  Check extents / proactive space addition         Space is being regularly added
 7  Check alert log for ORA- and warning messages
 8  Major wait events (latch/enqueue/lib cache pin)  No major wait events
 9  Max sessions
10  Long-running jobs                                6 inactive sessions running for more than 8 hrs
11  Invalid objects                                  185
12  Analyze jobs (once a week)                       Done on 20-JAN-2008 at 06:00:06
13  Temp usage / rollback segment usage              Normal
14  Nologging indexes
15  Hot backup / cold backup                         Went fine
16  Redo generation                                  Normal
17  PQ processes                                     Normal
18  I/O generation                                   Under normal threshold
19  Two-phase commit (2PC) pending transactions      0
DR / backup:
 1  Sync arch                                        Normal
 2  Purge arch                                       Normal
 3  Recovery status                                  Normal
20  Database health check script                     Showing locks and archive generation details

In Detail DATABASE Health check:
OPERATING SYSTEM:

1) Physical memory / load:
1) free: the free command displays the amount of total, free, and used physical memory (RAM) in the system, as well as information on shared memory, buffers, cached memory, and swap space used by the Linux kernel.
Usage:
$ free -m

2) vmstat: vmstat reports virtual memory statistics, with information about processes, swap, free, buffer, and cache memory, paging space, disk I/O activity, traps, interrupts, context switches, and CPU activity.
Usage:
$vmstat 5

3) top: the top command displays a dynamic real-time view of the tasks managed by the kernel. Its memory usage statistics include real-time total, used, and free physical memory and swap memory, along with their buffer and cached memory sizes.
Usage:
$top
4) ps: the ps command reports a snapshot of the currently active processes, including the percentage of memory used by each process or task running in the system. With this command, the top memory-hogging processes can be identified.
Usage:
$ps aux

2) OS space threshold (archive, ora_dump, etc.):
Check that OS space is available in all filesystems, especially the locations holding archive logs and Oracle database files. We can use the below OS commands:
$df -h
$du -csh *
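The manual df checks above can be wrapped in a small script that flags a filesystem crossing a usage threshold. A minimal sketch: the mount point "/" and the 90% limit are placeholder values; substitute your archive log and dump destinations.

```shell
#!/bin/sh
# Report usage of one filesystem and warn once past a threshold.
# "/" and 90 are example values only.
MOUNT=/
THRESHOLD=90
# df -P gives POSIX one-line-per-filesystem output; column 5 is Use%.
USED=$(df -P "$MOUNT" | awk 'NR==2 {gsub("%","",$5); print $5}')
if [ "$USED" -ge "$THRESHOLD" ]; then
    echo "WARNING: $MOUNT is ${USED}% full"
else
    echo "OK: $MOUNT is ${USED}% full"
fi
```

A cron entry running this every few minutes, mailing on WARNING, is the usual next step.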
3) Top 10 process consuming memory:
We can display the top 10 memory-consuming processes as follows (sorting numerically on the %MEM column):

ps aux | head -1; ps aux | sort -nrk 4 | head -10

We can also use the top command and press M, which orders the process list by memory usage.
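The same check can be kept as a small script that preserves the ps header. A sketch, assuming standard "ps aux" output where column 4 is %MEM; sorting on that column avoids relying on the non-portable --sort option.

```shell
#!/bin/sh
# Print the ps header, then the ten processes using the most memory.
ps aux | head -1
# Sort numerically, descending, on column 4 (%MEM).
TOP10=$(ps aux | sort -nrk 4 | head -10)
echo "$TOP10"
```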

4) Free volumes available:

We have to make sure sufficient disk space is available on the mount points of each OS server where the database is up and running.

$df -h

5) Filesystem space:

Check the filesystems on the OS side to confirm that sufficient space is available at all mount points and that usage is under the normal threshold.

DATABASE :

6) Check extents / proactive space addition:
Check each of the data, index, and temporary tablespaces for extent and block allocation details.

SET LINES 1000
SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS;

SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS WHERE TABLESPACE_NAME='STAR01D';

7) Check alert log for ORA- and warning messages:

Checking the alert log file regularly is a vital task. In the alert log we have to look for the following:

1) Any Oracle-related errors.
Open the alert log file with the less or more command and search for ORA-.
This will give you the error details and the time of occurrence.

2) Database-level or tablespace-level changes.
Monitor the alert log file for each day's activities happening in the database: bounces of the database, increases in the size of tablespaces, and changes to database parameters. In an 11g database we can also look for TNS errors in the alert log file.
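The ORA- search can also be scripted. A minimal sketch, using a throwaway file as a stand-in for a real alert log (in 11g the real file usually lives under $ORACLE_BASE/diag/rdbms/<dbname>/<sid>/trace/alert_<sid>.log):

```shell
#!/bin/sh
# Scan an alert log for ORA- errors, printing line numbers for context.
# /tmp/alert_demo.log is a stand-in created here purely for demonstration.
ALERT_LOG=/tmp/alert_demo.log
printf 'Mon Oct 31 10:00:00 2011\nORA-00600: internal error arguments\n' > "$ALERT_LOG"
grep -n 'ORA-' "$ALERT_LOG"
MATCHES=$(grep -c 'ORA-' "$ALERT_LOG")
echo "Found $MATCHES ORA- message(s)"
```

Point ALERT_LOG at the real file and mail the output when MATCHES is non-zero.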

8) Major wait events (latch/enqueue/Lib cache pin):

We can check the wait events details with the help of below queries:

SELECT s.saddr, s.SID, s.serial#, s.audsid, s.paddr, s.user#, s.username,
s.command, s.ownerid, s.taddr, s.lockwait, s.status, s.server,
s.schema#, s.schemaname, s.osuser, s.process, s.machine, s.terminal,
UPPER (s.program) program, s.TYPE, s.sql_address, s.sql_hash_value,
s.sql_id, s.sql_child_number, s.sql_exec_start, s.sql_exec_id,
s.prev_sql_addr, s.prev_hash_value, s.prev_sql_id,
s.prev_child_number, s.prev_exec_start, s.prev_exec_id,
s.plsql_entry_object_id, s.plsql_entry_subprogram_id,
s.plsql_object_id, s.plsql_subprogram_id, s.module, s.module_hash,
s.action, s.action_hash, s.client_info, s.fixed_table_sequence,
s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#,
s.row_wait_row#, s.logon_time, s.last_call_et, s.pdml_enabled,
s.failover_type, s.failover_method, s.failed_over,
s.resource_consumer_group, s.pdml_status, s.pddl_status, s.pq_status,
s.current_queue_duration, s.client_identifier,
s.blocking_session_status, s.blocking_instance, s.blocking_session,
s.seq#, s.event#, s.event, s.p1text, s.p1, s.p1raw, s.p2text, s.p2,
s.p2raw, s.p3text, s.p3, s.p3raw, s.wait_class_id, s.wait_class#,
s.wait_class, s.wait_time, s.seconds_in_wait, s.state,
s.wait_time_micro, s.time_remaining_micro,
s.time_since_last_wait_micro, s.service_name, s.sql_trace,
s.sql_trace_waits, s.sql_trace_binds, s.sql_trace_plan_stats,
s.session_edition_id, s.creator_addr, s.creator_serial#
FROM v$session s
WHERE ( (s.username IS NOT NULL)
AND (NVL (s.osuser, 'x') <> 'SYSTEM')
AND (s.TYPE <> 'BACKGROUND') AND STATUS='ACTIVE'
)
ORDER BY "PROGRAM";

The following query provides clues about whether Oracle has been waiting for library cache activities:

Select sid, event, p1raw, seconds_in_wait, wait_time
From v$session_wait
Where event = 'library cache pin'
And state = 'WAITING';

The below query gives details of user sessions' wait times and states:

SELECT NVL (s.username, '(oracle)') AS username, s.SID, s.serial#, sw.event,
sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.SID = sw.SID
ORDER BY sw.seconds_in_wait DESC;

9) Max Sessions:
There should not be more than 6 inactive sessions running for more than 8 hours in a Database in order to minimize the consumption of CPU and I/O resources.
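A sketch of a query to spot sessions breaching that guideline. It assumes SELECT access on V$SESSION; LAST_CALL_ET is the number of seconds since the session's last call.

```sql
-- Inactive user sessions idle for more than 8 hours
SELECT sid, serial#, username, status,
       ROUND(last_call_et / 3600, 1) AS hours_idle
FROM   v$session
WHERE  status = 'INACTIVE'
AND    username IS NOT NULL
AND    last_call_et > 8 * 3600
ORDER  BY last_call_et DESC;
```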

a)Users and Sessions CPU consumption can be obtained by below query:

Set lines 1000
select ss.username, se.SID,VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID and ss.status='ACTIVE'
and ss.username is not null
order by VALUE desc;


b) Users and Sessions CPU and I/O consumption can be obtained by below query:

-- Shows CPU and I/O consumption per day, per user, and per server process ID
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid -- statistic# 12: 'CPU used by this session' (version-dependent; verify in v$statname)
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;

10) Long running Jobs:

We can find out long running jobs with the help of the below query:

col username for a20
col message for a50
col remaining for 9999
select username,to_char(start_time, 'hh24:mi:ss dd/mm/yy') started,
time_remaining remaining, message
from v$session_longops
where time_remaining > 0
order by time_remaining desc;

11) Invalid objects:

We can check the invalid objects with the help of the below query:

select owner||' '||object_name||' '||created||' '||status from dba_objects where status='INVALID';

12) Analyze jobs (once a week):

As a golden rule, jobs should be analyzed once a week.
The below points can be considered when analyzing jobs.

Analyzing a Running Job
The status of a job or a task changes several times during its life cycle. A job can have the following as its status:
Scheduled: The job is created and will run at the specified time.
Running: The job is being executed and is in progress.
Initialization Error: The job or step could not be run successfully. If a step in a job fails initialization, the job status is Initialization Error.
Failed: The job was executed but failed.
Succeeded: The job was executed completely.
Stopped: The user canceled the job.
Stop Pending: The user has stopped the job. The already running steps are completing execution.
Suspended: This indicates that the execution of the job is deferred.
Inactive: This status indicates that the target has been deleted.
Reassigned: The owner of the job has changed.
Skipped: The job was not executed at the specified time and has been omitted.
The running jobs can be found out by the help of below query:

select sid, job,instance from dba_jobs_running;

We can find the failed and broken job details with the help of the below query:

select job||' '||schema_user||' '||Broken||' '||failures||' '||what||' '||last_date||' '||last_sec from dba_jobs;

13) Temp usage / Rollback segment/PGA usage:

We can get information of temporary tablespace usage details with the help of below query:
Set lines 1000
SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
a.sid||','||a.serial# SID_SERIAL,
a.username,
a.program
FROM sys.v_$session a,
sys.v_$sort_usage b,
sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

We can get information of Undo tablespace usage details with the help of the below query:
set lines 1000
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,
s.program,
r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r,
sys.v_$session s,
sys.v_$transaction t,
sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';

We can get the PGA usage details with the help of the below query:
select st.sid "SID", sn.name "TYPE",
ceil(st.value / 1024 / 1024/1024) "GB"
from v$sesstat st, v$statname sn
where st.statistic# = sn.statistic#
and sid in
(select sid from v$session where username like UPPER('&user'))
and upper(sn.name) like '%PGA%'
order by st.sid, st.value desc;
Enter value for user: STARTXNAPP
14) Validating the backup (hot backup / cold backup):

We have to verify that the hot backup/cold backup (or any physical or logical backup) of all production and non-production databases went fine. Make sure you have valid backups of all the databases, and check the backup locations to confirm that each backup completed on time with the data required for restore and recovery, should that be needed.

15) Redo generation / archive log generation details:
We should make sure there are no unusually frequent log switches in the database. Frequent log switches generate more archive logs, which can decrease database performance; in a production database the log switch rate could vary between 5 and 20 depending on the server configuration.

We can get the log switch details with the help of the below query:

Redo log switches, date-wise and hour-wise:
------------------------------------------
set lines 120;
set pages 999;
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
Archive log generation is directly proportional to the number of log switches happening in a database. If there are frequent log switches, more archive logs are generated, which can affect database performance.


We can use the below queries for archive logs generation details:

a)Archive logs by dates:
set lines 1000
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
b)Archive log generation details Day-wise :

select to_char(COMPLETION_TIME,'DD-MON-YYYY'),count(*)
from v$archived_log group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');

c) Archive log count of the day:

select count(*)
from v$archived_log
where trunc(completion_time)=trunc(sysdate);

d) Count of archive logs generated today on an hourly basis:
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
where to_char(first_time,'DD-MON-RR')='16-AUG-10'
group by to_char(first_time,'DD-MON-RR')
order by 1
/

16) I/O generation:
We can find the CPU and I/O details for all sessions in the database with the help of the query below:
-- Show I/O per session, CPU in seconds. sessionIOS.sql
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid   -- statistic# 12 is "CPU used by this session" on many releases; confirm in v$statname
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
To know what the session is doing and what kind of sql it is using:

-- what kind of sql a session is using
set lines 9999
set pages 9999

select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

eg: sid=1853

17) Sync arch:
In a Data Guard environment we have to check that the primary database is in sync with the standby database. This can be checked as follows:
The V$MANAGED_STANDBY view on the standby database site shows the activities performed by
both the redo transport and Redo Apply processes in a Data Guard environment:
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, you will need to perform gap recovery manually if you are using logical standby databases and the primary database is not available.
The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.
On a physical standby database
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;

If the query returns no rows, the primary database is in sync with the standby database. If it returns any rows, we have to apply the missing archived logs manually.

After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
Eg:
SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;
Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_8.arc';

After you register these log files on the physical standby database, you can restart Redo Apply. The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

On a logical standby database:
To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:
SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
2> WHERE NEXT_CHANGE# NOT IN
3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
4> ORDER BY THREAD#,SEQUENCE#;

THREAD# SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
1 6 /disk1/oracle/dbs/log-1292880008_6.arc
1 10 /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_7.arc';

After you register these log files on the logical standby database, you can restart SQL Apply.

The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
Monitoring Log File Archival Information:
Step 1 Determine the current archived redo log file sequence numbers.
Enter the following query on the primary database to determine the current archived redo log file sequence numbers:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG
WHERE STATUS='CURRENT';
Step 2 Determine the most recent archived redo log file.
Enter the following query at the primary database to determine which archived redo log file contains the most recently transmitted redo data:
SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
Step 3 Determine the most recent archived redo log file at each destination.
Enter the following query at the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
2> FROM V$ARCHIVE_DEST_STATUS
3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID 1 947
standby1 VALID 1 947
The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.
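As a quick follow-up check (a sketch; V$ARCHIVE_DEST also exposes an ERROR column alongside STATUS), the failing destination and its error text can be listed with:

```sql
-- list archive destinations that are neither valid nor inactive,
-- together with the error reported for the last archival attempt
select dest_id, dest_name, status, error
  from v$archive_dest
 where status not in ('VALID', 'INACTIVE');
```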
Step 4 Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.
Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:
SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
3> LOCAL WHERE
4> LOCAL.SEQUENCE# NOT IN
5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
6> THREAD# = LOCAL.THREAD#);

THREAD# SEQUENCE#
--------- ---------
1 12
1 13
1 14

18) Purge arch:
We have to make sure the archived log files are purged safely, or moved to a tape drive or another location, in order to free space for new archived log files in the archive log destination.
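A hedged sketch of one common approach with RMAN (the 7-day window is illustrative; in a Data Guard configuration, also confirm the logs have been applied on the standby before deleting):

```sql
RMAN> BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES;
RMAN> DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE - 7';
```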

19) Recovery status:
To be able to recover, make sure you have the latest archived logs available, so that you can restore and perform recovery if required.
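As a minimal sketch using V$ARCHIVED_LOG (the APPLIED column is meaningful on a standby database), the latest archived and applied sequences per thread can be checked with:

```sql
-- highest sequence archived for each thread
select thread#, max(sequence#) as last_archived
  from v$archived_log
 group by thread#;

-- on a standby: highest sequence already applied
select thread#, max(sequence#) as last_applied
  from v$archived_log
 where applied = 'YES'
 group by thread#;
```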


20) MY DATABASE HEALTH CHECK SCRIPT:
/* SCRIPT FOR MONITORING AND CHECKING HEALTH OF DATABASE-USEFUL FOR PRODUCTION DATABASES */

-- SHOWS RUNNING JOBS
select 'RUNNING JOBS', sid, job,instance from dba_jobs_running;
set lines 1000
-- SHOWS ARCHIVE LOG GENERATION DETAILS ON AN HOURLY AND DATE-WISE BASIS
select 'ARCHIVE LOG REPORT',to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
-- SHOWS WHICH SESSIONS ARE BLOCKED
select 'SESSIONS BLOCKED',process,sid, blocking_session from v$session where blocking_session is not null;
-- WHICH SESSION IS BLOCKING WHICH SESSION
set lines 9999
set pages 9999
select s1.username || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2 ;
-- SHOWS BLOCK CHANGES DETAILS AND PHYSICAL READS DETAIL
select a.sid,b.username,block_gets,consistent_gets,physical_reads,block_changes
from V$SESS_IO a,V$SESSION b
where a.sid=b.sid and block_changes > 10000 order by block_changes desc;
-- show IO per session / CPU in seconds. sessionIOS.sql
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid   -- statistic# 12 is "CPU used by this session" on many releases; confirm in v$statname
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
-- SCRIPT TO IDENTIFY LONG RUNNING STATEMENTS
rem LONGOPS.SQL
rem Long Running Statements
rem Helmut Pfau, Oracle Deutschland GmbH
set linesize 120
col opname format a20
col target format a15
col units format a10
col time_remaining format 99990 heading Remaining[s]
col bps format 9990.99 heading [Units/s]
col fertig format 90.99 heading "complete[%]"
select sid,
opname,
target,
sofar,
totalwork,
units,
(totalwork-sofar)/time_remaining bps,
time_remaining,
sofar/totalwork*100 fertig
from v$session_longops
where time_remaining > 0
/
-- ACTIVE SESSIONS IN DATABASE
select 'ACTIVE SESSION', sid, serial#, machine, osuser, username, status from v$session where username is not null and status='ACTIVE';
-- WHAT SQL A SESSION IS USING
set lines 9999
set pages 9999
select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

eg:SID=1844
I would like to add one more script that reports the database size (total, used and free), tablespace usage details, and the hit ratios of various SGA components, which can be very helpful
for monitoring the performance of your databases.

Database_monitor.sql:

ttitle "1. :============== Tablespace Usage Information ==================:" skip 2
set linesize 140
col Total format 99999.99 heading "Total space(MB)"
col Used format 99999.99 heading "Used space(MB)"
col Free format 99999.99 heading "Free space(MB)"
break on report
compute sum of Total on report
compute sum of Used on report
compute sum of Free on report
select a.tablespace_name, round(a.bytes/1024/1024,2) Total,
round( nvl( b.bytes,0)/1024/1024,2) Used,
round(nvl(c.bytes, 0)/1024/1024,2) Free ,
round(nvl(b.bytes,0)*100/nvl(a.bytes,0),2) "% Used"
from sys.sm$ts_avail a, sys.sm$ts_used b, sys.sm$ts_free c
where a.tablespace_name=b.tablespace_name(+)
and b.tablespace_name=c.tablespace_name(+);

ttitle "2. :============== Hit Ratio Information ==================:" skip 2
set linesize 80
clear columns
clear breaks
set pagesize 60 heading off termout off echo off verify off
REM
col val1 new_val lib noprint
select 100*(1-(SUM(Reloads)/SUM(Pins))) val1
from V$LIBRARYCACHE;

ttitle off
col val2 new_val dict noprint
select 100*(1-(SUM(Getmisses)/SUM(Gets))) val2
from V$ROWCACHE;

ttitle off
col val3 new_val phys_reads noprint
select Value val3
from V$SYSSTAT
where Name = 'physical reads';

ttitle off
col val4 new_val log1_reads noprint
select Value val4
from V$SYSSTAT
where Name = 'db block gets';

ttitle off
col val5 new_val log2_reads noprint
select Value val5
from V$SYSSTAT
where Name = 'consistent gets';

ttitle off
col val6 new_val chr noprint
select 100*(1-(&phys_reads / (&log1_reads + &log2_reads))) val6
from DUAL;

ttitle off
col val7 new_val avg_users_cursor noprint
col val8 new_val avg_stmts_exe noprint
select SUM(Users_Opening)/COUNT(*) val7,
SUM(Executions)/COUNT(*) val8
from V$SQLAREA;
ttitle off
set termout on
set heading off
ttitle -
center 'SGA Cache Hit Ratios' skip 2

select 'Data Block Buffer Hit Ratio : '||&chr db_hit_ratio,
' Shared SQL Pool ',
' Dictionary Hit Ratio : '||&dict dict_hit,
' Shared SQL Buffers (Library Cache) ',
' Cache Hit Ratio : '||&lib lib_hit,
' Avg. Users/Stmt : '||
&avg_users_cursor||' ',
' Avg. Executes/Stmt : '||
&avg_stmts_exe||' '
from DUAL;

ttitle "3. :============== Sort Information ==================:" skip 2

select A.Value Disk_Sorts,
B.Value Memory_Sorts,
ROUND(100*A.Value/
DECODE((A.Value+B.Value),0,1,(A.Value+B.Value)),2)
Pct_Disk_Sorts
from V$SYSSTAT A, V$SYSSTAT B
where A.Name = 'sorts (disk)'
and B.Name = 'sorts (memory)';

ttitle "4. :============== Database Size Information ==================:" skip 2


select sum(bytes/1024/1024/1024) Avail from sm$ts_avail
union all
select sum(bytes/1024/1024/1024) Used from sm$ts_used
union all
select sum(bytes/1024/1024/1024) Free from sm$ts_free;


Hope this helps you in monitoring your Databases.

Friday, October 21, 2011

Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations

Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations [ID 870314.1]

--------------------------------------------------------------------------------

Modified 12-MAY-2011 Type BULLETIN Status PUBLISHED

In this Document
Purpose
Scope and Application
Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations



--------------------------------------------------------------------------------



Applies to:
Business Intelligence Applications Consumer - Version: 7.9.6 [1340] to 7.9.6.2 [AA 1830] - Release: V7 to V7
Information in this document applies to any platform.

Purpose
This article discusses performance topics for Oracle Business Intelligence Applications version 7.9.6.x.

Scope and Application
The information contained in this bulletin is applicable to Oracle Business Intelligence Applications version 7.9.6.x and is intended for Oracle Business Intelligence Administrators, DBAs and Applications implementers. It covers advanced performance tuning techniques in Informatica and Oracle RDBMS, so all recommendations must be carefully verified in a test environment before applying them to a production instance.

Please note that similar detailed performance recommendations for Teradata are expected in the next 12 months. For other databases, such as DB2 and SQL Server, detailed performance recommendations will be published in the future, but no timeframe is available. Also note that general/key performance guidelines/parameters for Teradata, DB2 and SQL Server are included in the Oracle Business Intelligence Applications Installation Guide for Informatica PowerCenter Users.

Oracle Business Intelligence Applications Version 7.9.6.x Performance Recommendations
Oracle Business Intelligence (BI) Applications Version 7.9.6.x delivers a number of adapters to various business applications on the Oracle database. The 7.9.6.x version is certified with other major data warehousing platforms. Each Oracle BI Applications implementation requires very careful planning to ensure the best performance, both during ETL and during web query or dashboard execution.


This article discusses performance topics for Oracle BI Applications 7.9.6 and higher using the Informatica PowerCenter 8.6.x ETL platform. It is intended for Oracle BI Administrators, DBAs and Applications implementers. Customers are encouraged to engage Oracle Expert Services to review their configurations prior to implementing the recommendations in their BI Applications environments.


Please review the attached document for more details.


Attachments



--------------------------------------------------------------------------------
7962 Performance Technote (629.08 KB)


Related



--------------------------------------------------------------------------------
Products
--------------------------------------------------------------------------------

•Middleware > Business Intelligence > Oracle Business Intelligence Applications > Business Intelligence Applications Consumer
Keywords
--------------------------------------------------------------------------------
BUSINESS INTELLIGENCE; INTELLIGENCE; PERFORMANCE TUNING; WAREHOUSES




Copyright (c) 2007, 2010, Oracle. All rights reserved.

Supported System Configurations and Limits for Oracle Linux Releases [ID 300652.1]

Supported System Configurations and Limits for Oracle Linux Releases [ID 300652.1]

--------------------------------------------------------------------------------

Modified 15-FEB-2011 Type BULLETIN Status PUBLISHED

In this Document
Purpose
Scope and Application
Supported System Configurations and Limits for Oracle Linux Releases
References



--------------------------------------------------------------------------------



Applies to:
Linux OS - Version: Enterprise Linux 3.0 to Oracle Linux 6.0 with Unbreakable Enterprise Kernel [2.6.32] - Release: RHEL3 to OL6
Linux x86
Linux x86-64
Linux Itanium

Purpose
Supported system configurations and limits (memory, CPU and disk limitations, with maximums and minimums) for Oracle Linux 3, 4, 5, 6 and the Unbreakable Enterprise Kernel.
Scope and Application
This document is informational and intended for any user using Oracle Linux.
Supported System Configurations and Limits for Oracle Linux Releases
Architectures Supported

Arch \ Version    3      4        5        5+Unbreakable Enterprise Kernel    6 (9)
x86 (1)           Yes    Yes      Yes      No                                 Yes
x86_64 (2)        Yes    Yes      Yes      Yes (7)                            Yes
ia64              No     Yes (4)  Yes (6)  No                                 No


Maximum Logical CPUs (3)

Arch \ Version    3      4               5               6
x86               16     4               5               32
x86_64            4      Unlimited (8)   Unlimited (8)   Unlimited
ia64              N/A    Unlimited (8)   Unlimited (8)   N/A


Maximum Physical Memory (RAM)

Arch \ Version    3       4                 5                 5+Unbreakable Enterprise Kernel    6 (9)
x86               64GB    64GB              16GB (5)          N/A                                16GB (5)
x86_64            64GB    256GB / 1TB (10)  256GB / 1TB (10)  2TB                                2TB
ia64              N/A     2 TB              2 TB              N/A                                N/A


Maximum Default File System Size

Arch \ Version    3 (ext3)    4 (ext3)    5 (ext3)           6 (ext3/ext4)
x86               1TB         8TB         16TB (Update 1)    16TB
x86_64            1TB         8TB         16TB (Update 1)    16TB
ia64              N/A         8TB         8TB                N/A


Maximum x86 (32-bit) Address Space

Arch \ Version    3             4             5                 6
x86               approx. 4 GB  approx. 4 GB  approx. 3 GB (5)  approx. 3 GB (5)


Minimum Memory (RAM)

Arch \ Version    3       4       5       6
x86               256MB   256MB   512MB   512MB
x86_64            512MB   512MB   512MB   1GB
ia64              N/A     512MB   512MB   N/A


Minimum Disk Space

Arch \ Version    3        4        5      6
x86               800 MB   800 MB   1GB    1GB
x86_64            800 MB   800 MB   1GB    1GB
ia64              N/A      800 MB   1GB    N/A




(1) Intel 32-bit Systems in the tables above.
(2) AMD64, Intel EM64T 64-bit Systems in the tables above.
(3) Multi-core and/or hyperthreading CPUs count as a single CPU in the table above.
(4) Update 7 is available as of August 1, 2008
(5) The limit is low because the "hugemem" kernel is not available
(6) Update 4 is available as of April, 2010
(7) Unbreakable Enterprise Kernel is available over Oracle Linux 5.5 and up
(8) With latest update level
(9) With Oracle Linux 6 the default kernel is Unbreakable Enterprise Kernel on x86_64 and RHEL compatible kernel with x86 (32-bit). The limits are provided based on that.
(10) AMD64 / Intel64: on these kernels, larger memory can be handled with Intel CPUs




--------------------------------------------------------------------------------


References
http://www.oracle.com/us/technologies/linux/025994.htm

Related



--------------------------------------------------------------------------------
Products
--------------------------------------------------------------------------------

•Oracle Linux and Virtualization > Oracle Linux > Operating System > Linux OS
Keywords
--------------------------------------------------------------------------------
ITANIUM; INTEL; KERNEL; X86; RAM; ENTERPRISE; RHEL; LINUX





Monday, October 17, 2011

Step By Step Guide to Creating a Custom Application in Applications 11i

Modified 05-MAY-2011 Type WHITE PAPER Status PUBLISHED

Applies to: Oracle Application Object Library - Version: 11.5.10.0 to 11.5.10.2 [Release: 11.5.10 to 11.5.10]
Information in this document applies to any platform.
*************************************************************
This article is being delivered in Draft form and may contain errors. Please use the MetaLink "Feedback" button to advise Oracle of any issues related to this article.
*************************************************************
Abstract

PURPOSE
-------
This note describes the basic steps needed to set up a Custom Application within Oracle Applications 11i.
Step By Step Guide to Creating a Custom Application in Applications 11i
************************************************************************
IMPORTANT NOTE - This note is retained for reference only. You should now follow Note 270519.1 "Customizing an AutoConfig Environment" to make changes to your system configuration.
************************************************************************
Creating a Custom Application in Applications 11i
-------------------------------------------------
Custom Applications are required if you are creating new forms, reports, etc. This allows you to segregate your custom-written files from the standard seeded functionality that Oracle Applications provides. Customizations can therefore be preserved when applying patches or upgrades to your environment.
1) Make the directory structure for your custom application files.

cd $APPL_TOP
mkdir xxmz
mkdir xxmz/11.5.0
mkdir xxmz/11.5.0/admin
mkdir xxmz/11.5.0/admin/sql
mkdir xxmz/11.5.0/admin/odf
mkdir xxmz/11.5.0/sql
mkdir xxmz/11.5.0/bin
mkdir xxmz/11.5.0/reports
mkdir xxmz/11.5.0/reports/US
mkdir xxmz/11.5.0/forms
mkdir xxmz/11.5.0/forms/US
mkdir xxmz/11.5.0/$APPLLIB
mkdir xxmz/11.5.0/$APPLOUT
mkdir xxmz/11.5.0/$APPLLOG



2) Add the custom module into the environment

Apply ADX.E.1 and add the entry to topfile.txt as a standard product top entry (follow the existing model in the file).
Customised environment variables can be added to AutoConfig by using the filename specified by s_custom_file, which is then called from the APPSORA.env file.
If using Forms Listener Servlet, you may also need to add $CUSTOM_TOP to formsservlet.ini in $APACHE_TOP/Jserv/etc.



3) Create new tablespace for database objects

create tablespace xxmz
datafile '/emea/oracle/visuk09/visuk09data/xxmz.dbf' size 10M
default storage (initial 10k next 10k);



4) Create schema

create user xxmz identified by xxmz
default tablespace xxmz
temporary tablespace temp
quota unlimited on xxmz
quota unlimited on temp;
grant connect, resource to xxmz;



5) Register your Oracle Schema

Login to Applications with the System Administrator responsibility.
Navigate to Application --> Register
Application = xxmz Custom
Short Name = xxmz
Basepath = xxmz_TOP
Description = xxmz Custom Application
6) Register Oracle User

Navigate to Security --> Oracle --> Register
Database User Name = xxmz
Password = xxmz
Privilege = Enabled
Install Group = 0
Description = xxmz Custom Application User
7) Add Application to a Data Group

Navigate to Security --> Oracle --> DataGroup
Data Group = xxmzGroup
Description = xxmz Custom Data Group
Click on "Copy Applications from" and pick the Standard data group, then add the following entry:
Application = xxmz Custom
Oracle ID = APPS
Description = xxmz Custom Application
8) Create custom request group

This will act as a placeholder for any custom reports we wish to make available for the custom responsibility (which is defined at a later stage).
Navigate to Security --> Responsibility --> Request
Group = xxmz Request Group
Application = xxmz Custom
Code = xxmz
Description = xxmz Custom Requests
We will not define any requests to add to the group at this stage, but you can add some now if required.
9) Create custom menu

This will act as a placeholder for any menu items we wish to make available for the custom responsibility (which is defined at a later stage). We will create two menus, one for Core Applications and one for Self Service.
Navigate to Application --> Menu
Menu = xxmz_CUSTOM_MENU
User Menu Name = xxmz Custom Application
Menu Type =
Description = xxmz Custom Application Menu
Seq = 100
Prompt = View Requests
Submenu =
Function = View All Concurrent Requests
Description = View Requests
Seq = 110
Prompt = Run Requests
Submenu =
Function = Requests: Submit
Description = Submit Requests

Menu = xxmz_CUSTOM_MENU_SSWA
User Menu Name = xxmz Custom Application SSWA
Menu Type =
Description = xxmz Custom Application Menu for SSWA
10) Create new responsibilities, one for Core Applications and one for Self Service (SSWA)

Navigate to Security --> Responsibility --> Define
Responsibility Name = xxmz Custom
Application = xxmz Custom
Responsibility Key = xxmzCUSTOM
Description = xxmz Custom Responsibility
Available From = Oracle Applications
Data Group Name = xxmzGroup
Data Group Application = xxmz Custom
Menu = xxmz Custom Application
Request Group Name = xxmz Request Group

Responsibility Name = xxmz Custom SSWA
Application = xxmz Custom
Responsibility Key = xxmzCUSTOMSSWA
Description = xxmz Custom Responsibility SSWA
Available From = Oracle Self Service Web Applications
Data Group Name = xxmzGroup
Data Group Application = xxmz Custom
Menu = xxmz Custom Application SSWA
Request Group Name = xxmz Request Group
11) Add responsibility to user

Navigate to Security --> User --> Define
Add the xxmz Custom responsibility to users as required.
12) Other considerations

You are now ready to create your database objects, custom reports, forms, packages, etc.
Create the source code files in the xxmz_TOP directory appropriate for the type of object. For example, forms would be located in $xxmz_TOP/forms/US, or package source code in $xxmz_TOP/admin/sql.
Database objects, such as tables, indexes and sequences, should be created in the xxmz schema; then you need to:
a) Grant all privileges on each custom data object to the APPS schema.
For example, logged in as the xxmz user:
grant all privileges on myTable to apps;
b) Create a synonym in APPS for each custom data object.
For example, logged in as the APPS user:
create synonym myTable for xxmz.myTable;
Other database objects, such as views and packages, should be created directly in the APPS schema.

RELATED DOCUMENTS
-----------------
Oracle Applications Release 11i Developers Guide.
Oracle Applications Release 11i System Administrators Guide.

Additional Search Words
-----------------------
11i custom customization core apps

How to Gather Statistics on Custom Schemas for Ebusiness Suite 11i and R12?


How to Gather Statistics on Custom Schemas for Ebusiness Suite 11i and R12? [ID 1065813.1]

Modified 21-FEB-2010 Type HOWTO Status PUBLISHED

Applies to: Oracle Application Object Library - Version: 11.5.10 to 12.1 - Release: 11.5 to 12
Information in this document applies to any platform.
Goal

Gather statistics for Oracle Ebusiness Suite 11i or R12 on a custom schema.
Solution

To gather statistics for Oracle Applications Ebusiness Suite 11i or R12 modules, you should use the concurrent programs 'Gather Schema Statistics' or 'Gather Table Statistics', or run 'FND_STATS'. In order to run these for a custom schema, the schema must be registered with the application.
To register a schema with Oracle Applications, follow the steps listed in Note 176852.1 - Integrating Custom Applications with Oracle Applications. If the custom schema is not registered, it will not show up in the LOV for schema selection for the mentioned concurrent programs. You can run the following statement to see which schemas are currently registered with the Ebusiness Suite:
select distinct(upper(oracle_username)) sname
from fnd_oracle_userid a, fnd_product_installations b
where a.oracle_id = b.oracle_id
order by sname;
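Once the custom schema appears in that list, statistics can also be gathered directly from SQL*Plus with the FND_STATS package instead of the concurrent program. A minimal sketch, assuming a registered custom schema named XXMZ (an example name) and a connection as APPS:

```sql
-- Gather statistics for the custom schema XXMZ (example name) via FND_STATS.
begin
  fnd_stats.gather_schema_stats(schemaname => 'XXMZ');
end;
/
```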

Thursday, October 13, 2011



Diagnostic Script for Concurrent Manager



spool ccm.lst
prompt Step 1 Checking how many rows in FND_CONCURRENT_REQUESTS table.
select count(*) from fnd_concurrent_requests;
prompt -----------------------------------------


prompt Step 2 Checking how many rows in FND_CONCURRENT_PROCESSES table.
select count(*) from fnd_concurrent_processes;
prompt -----------------------------------------------




prompt Step 3 Checking sys.dual table which should have one and only one row.
select count(*) from sys.dual;


prompt If you have more than one row in sys.dual, delete the extra rows, for example:
prompt SQL> delete from sys.dual where rownum <= (select count(*) - 1 from sys.dual);
prompt
prompt ---------------------------------------------




prompt Step 4 Checking fnd_dual. There must be at least one row:


select count(*) from fnd_dual;


prompt If no record is selected,
prompt update the fnd_dual table to have at least one record
prompt ----------------------------------------------


prompt Step 5 Checking the Internal Manager queue name "FNDICM", which should be = 1


select concurrent_queue_id from fnd_concurrent_queues
where concurrent_queue_name='FNDICM';



prompt ----------------------------------------------


prompt Step 6 Checking for Active processes under the Internal Manager queue
prompt in fnd_concurrent_processes table:
prompt


select a.concurrent_queue_name
, substr(b.os_process_id,0,10) "OS Proc"
, b.oracle_process_id "Oracle ID"
, b.process_status_code
from fnd_concurrent_queues a
, fnd_concurrent_processes b
where a.concurrent_queue_id=b.concurrent_queue_id
and a.concurrent_queue_name='FNDICM'
and b.process_status_code='A'
order by b.process_status_code;



prompt If any rows are found with process_status_code = 'A' (Active),
prompt the Internal Manager will not start up. To avoid this issue,
prompt update these rows to have process_status_code = 'K' (terminated).
prompt
prompt -----------------------------------------
prompt Step 7 Checking for Active processes under the Standard Manager queue
prompt in fnd_concurrent_processes table:
prompt


select a.concurrent_queue_name
, substr(b.os_process_id,0,10) "OS Proc"
, b.oracle_process_id "Oracle ID"
, b.process_status_code
from fnd_concurrent_queues a
, fnd_concurrent_processes b
where a.concurrent_queue_id=b.concurrent_queue_id
and a.concurrent_queue_name='STANDARD'
and b.process_status_code='A'
order by b.process_status_code;



prompt If any rows are found with process_status_code = 'A' (Active),
prompt the Internal Manager will not start up. To avoid this issue,
prompt update these rows to have process_status_code = 'K' (terminated).
prompt
prompt ------------------------------------------
prompt Step 8 Checking for Active processes under the Conflict Manager queue
prompt in fnd_concurrent_processes table:
prompt


select a.concurrent_queue_name
, substr(b.os_process_id,0,10) "OS Proc"
, b.oracle_process_id "Oracle ID"
, b.process_status_code
from fnd_concurrent_queues a
, fnd_concurrent_processes b
where a.concurrent_queue_id=b.concurrent_queue_id
and a.concurrent_queue_name='FNDCRM'
and b.process_status_code='A'
order by b.process_status_code;

prompt If any rows are found with process_status_code = 'A' (Active),
prompt the Internal Manager will not start up. To avoid this issue,
prompt update these rows to have process_status_code = 'K' (terminated).
prompt
prompt ---------------------------------------------------
prompt Step 9 Checking Actual and Target Processes for Internal Manager: 

select MAX_PROCESSES,RUNNING_PROCESSES
from FND_CONCURRENT_QUEUES
where CONCURRENT_QUEUE_NAME='FNDICM';



prompt If MAX_PROCESSES = RUNNING_PROCESSES, that means the manager is UP.
prompt
prompt --------------------------------------------------------


prompt Step 10 Checking Actual and Target Processes for the Standard Manager: 

select MAX_PROCESSES,RUNNING_PROCESSES
from FND_CONCURRENT_QUEUES
where CONCURRENT_QUEUE_NAME='STANDARD';



prompt If MAX_PROCESSES = RUNNING_PROCESSES, that means the manager is UP.
prompt
prompt ---------------------------------------------------------
prompt Step 11 Checking Actual and Target Processes for Conflict Resolution Manager:


select MAX_PROCESSES,RUNNING_PROCESSES
from FND_CONCURRENT_QUEUES
where CONCURRENT_QUEUE_NAME='FNDCRM';



prompt If MAX_PROCESSES = RUNNING_PROCESSES, that means the manager is UP.
prompt
prompt ---------------------------------------------------------


prompt Step 12 Checking if the control_code is set to 'N':
select control_code from fnd_concurrent_queues
where control_code='N'; 

prompt
prompt If any rows are selected, please update the table fnd_concurrent_queues:
prompt SQL> Update fnd_concurrent_queues set control_code = null
prompt      where control_code = 'N';
prompt SQL> Update fnd_concurrent_queues set target_node = null;
prompt SQL> commit;
prompt
prompt --------------------------------


prompt Step 13 Checking terminated processes:
prompt
select count (*) from fnd_concurrent_requests
where status_code='T';
prompt

prompt If you have terminated processes run the following sql statement:
prompt
prompt SQL> Update fnd_concurrent_requests
prompt set status_code = 'E', phase_code = 'C'
prompt where status_code = 'T';
prompt ------------------------------------------




prompt Step 14 Checking pending requests:
select count(*) from fnd_concurrent_requests
where status_code='P';

prompt If any rows are selected, please run the following sql statement:


prompt SQL> Update fnd_concurrent_requests
prompt set status_code = 'E', phase_code = 'C'
prompt where status_code = 'P';
prompt ------------------------------------------------------
prompt Step 15 Checking Running processes:
prompt
select count (*) from fnd_concurrent_requests
where status_code='R';

prompt
prompt If you have Running processes run the following sql statement:
prompt SQL> Update fnd_concurrent_requests
prompt set status_code = 'E', phase_code = 'C'
prompt where status_code = 'R';
prompt
prompt ------------------------------------------


prompt Step 16 Checking the PMON method, which should be set to LOCK:
prompt
select profile_option_id , profile_option_value
from FND_PROFILE_OPTION_VALUES
where profile_option_id= (select profile_option_id
from FND_PROFILE_OPTIONS
where profile_option_name='CONC_PMON_METHOD');

prompt
prompt If the PROFILE_OPTION_VALUE wasn't LOCK, please
prompt reset PMON to LOCK by running the afimpmon.sql script (the manager should be down):


prompt 1-At UNIX command prompt:


prompt 2-cd $FND_TOP/sql


prompt 3-Log into SQLPLUS as apps/

prompt SQL> @afimpmon.sql
prompt prompt1:dual
prompt prompt2:LOCK (LOCK MUST BE ALL UPPERCASE)


prompt For Oracle Applications Release 11.5, when you check the PMON
prompt Method you may get no rows selected which is normal,
prompt because in apps 11.5 the Pmon Method is hard coded to Lock at
prompt the Operating System level.
prompt
prompt -------------------------------------------------------


prompt Step-17 Checking how many FNDLIBR processes are running:
prompt -For Unix: from the unix command prompt: $ ps -ef | grep -i fndlibr
prompt If you have any FNDLIBR processes running, please kill them before
prompt starting or shutting down the internal manager
prompt
prompt
prompt -For NT, through Task Manager, check the entries under the Processes tab
prompt for FNDLIBR.exe processes.
prompt If there are any, Highlight and click [End Process] button to kill processes


prompt ----------------------------------------------------------


prompt Step-18 Checking how many 'FND_%' invalid objects:
select substr(owner,1, 12) owner, substr(object_type,1,12) type,
substr(status,1,8) status, substr(object_name, 1, 25) name
from dba_objects
where object_name like 'FND_%'
and status='INVALID';



prompt If you have any invalid objects please see Note 113947.1 via Metalink


prompt --------------------------------------------------------------


prompt Step-19-How to find the PID in the O/S for request_id:
prompt If you do not want to check this, enter any number then press Enter to exit


select r.request_id, p.os_process_id
from FND_CONCURRENT_REQUESTS r,FND_CONCURRENT_PROCESSES p
where r.controlling_manager = p.concurrent_process_id
and request_id=&request_id;



prompt
prompt Please upload the "ccm.lst" output to Support, Thanks.
prompt
spool off
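The remediation statements that the script's prompts describe across Steps 6-8 and 12-15 can be collected into one cleanup sketch. This consolidation is not part of the original script; run it only after confirming (at the OS level and ideally with Support) that the managers are down and the flagged processes are truly dead.

```sql
-- Consolidated cleanup sketch derived from the prompts above. Review each
-- statement before running; all changes are committed together at the end.

-- Steps 6-8: mark stale Active manager processes as terminated ('K').
update fnd_concurrent_processes
set process_status_code = 'K'
where process_status_code = 'A'
and concurrent_queue_id in (select concurrent_queue_id
                            from fnd_concurrent_queues
                            where concurrent_queue_name in ('FNDICM','STANDARD','FNDCRM'));

-- Step 12: clear control_code 'N' and target_node.
update fnd_concurrent_queues
set control_code = null, target_node = null
where control_code = 'N';

-- Steps 13-15: complete terminated/pending/running requests with error status.
update fnd_concurrent_requests
set status_code = 'E', phase_code = 'C'
where status_code in ('T','P','R');

commit;
```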