Friday, November 30, 2012

Why does the emCCR cron job run constantly?







Oracle Configuration Manager - Version: 10.2.7.0 to 10.3.1
Information in this document applies to any platform.
***Checked for relevance on 16-FEB-2012***

OSS Support Tools - Version: 3.0
Goal

Explain why the emCCR cron job runs constantly.
Solution

After OCM is installed into an ORACLE_HOME (by whatever means), a cron job similar to the following may be present:



  0,15,30,45 * * * * JAVA_HOME=/u00/app/oracle/product/10.2.0.3/jdk /u00/app/oracle/product/10.2.0.3/ccr/bin/emCCR -silent start



The emCCR cron job ensures that OCM (and thus the internal OCM scheduler) is running.  When configured, the OCM process should always be running in the background so that the collection occurs every 24 hours by default.

If the emCCR cron job detects that OCM is running, it does nothing; if OCM is not running, the cron job starts it.  The job runs every 15 minutes.





Wednesday, November 28, 2012

How to Perform a Health Check on the Database






Applies to:

Oracle Server - Enterprise Edition - Version 7.3.4.0 and later
Information in this document applies to any platform.



Purpose



This article explains how to perform a BASIC health check on the database, verifying
several configuration issues.  General guidelines are given on which areas to investigate
to get a better overview of how the database is working and evolving. These guidelines
reveal common configuration issues as well as problems that may occur in the future.
For a more in-depth health check of the database structure and data dictionary integrity,
please follow the appropriate links in chapter 11.
The areas investigated here are mostly based on scripts and are provided without
any warranty; these scripts may need to be adapted for future database releases and features.
This article will probably need to be extended to serve specific application needs/checks.
Although some performance areas are discussed in this article, it is not the intention
of this article to give a full, detailed explanation of optimizing database performance.
Scope

1. Parameter file
2. Controlfiles
3. Redolog files
4. Archiving
5. Datafiles
  5.1 Autoextend
  5.2 Location
6. Tablespaces
  6.1 SYSTEM Tablespace
  6.2 SYSAUX Tablespace (10g Release and above)
  6.3 Locally vs Dictionary Managed Tablespaces
  6.4 Temporary Tablespace
  6.5 Tablespace Fragmentation
7. Objects
  7.1 Number of Extents
  7.2 Next extent
  7.3 Indexes
8. AUTO vs MANUAL undo
  8.1 AUTO UNDO
  8.2 MANUAL UNDO
9. Memory Management
  9.1 Pre Oracle 9i
  9.2 Oracle 9i
  9.3 Oracle 10g
  9.4 Oracle 11g
10. Logging & Tracing
  10.1 Alert File
  10.2 Max_dump_file_size
  10.3 User and core dump size parameters
  10.4 Audit files
11. Advanced Health Checking

Details

1. Parameter file

The parameter file can exist in two forms. First there is the text-based version, commonly referred to as init.ora or pfile, and then there is a binary-based file, commonly referred to as spfile. The pfile can be adjusted using a standard Operating System editor, while the spfile needs to be managed through the instance itself.

It is important to realize that the spfile takes precedence over the pfile: whenever an spfile is available it will be used automatically unless specified otherwise.
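To verify which file was used to start the instance, you can query the 'spfile' parameter; the value is empty when a pfile was used. The pfile location below is only an example:

connect as sysdba
SQL> show parameter spfile

SQL> create pfile='/tmp/initORCL.ora' from spfile;

The 'create pfile from spfile' command also gives you a text backup of the current settings.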

NOTE: Getting an RDA report after making changes to the database configuration is also a recommendation. Keeping historical RDA reports will ensure you have an overview of the database configuration as the database evolves.

Reference:
Note 249664.1 Pfile vs SPfile

2. Controlfiles

It is highly recommended to have at least two copies of the controlfile. This can be done by mirroring the controlfile, preferably on different physical disks. If a controlfile is lost, due to a disk crash for example, you can use the mirrored copy to start up the database. In this way fast and easy recovery from controlfile loss is obtained.

connect as sysdba
SQL> select status, name from v$controlfile;

STATUS NAME
------- ---------------------------------
/u01/oradata/L102/control01.ctl
/u02/oradata/L102/control02.ctl

The location and the number of controlfiles can be controlled by the 'control_files' initialization parameter.
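As a sketch, an extra controlfile copy can be registered as follows; the first two paths match the example above, the /u03 path is an example. When using an spfile, the change takes effect after the file is copied at OS level and the instance is restarted:

SQL> alter system set control_files=
  '/u01/oradata/L102/control01.ctl',
  '/u02/oradata/L102/control02.ctl',
  '/u03/oradata/L102/control03.ctl' scope=spfile;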

3. Redolog files

The Oracle server maintains online redo log files to minimize loss of data in the database. Redo log files are used in situations such as instance failure to recover committed data that has not yet been written to the data files. Mirroring the redo log files, preferably on different physical disks, makes recovery easier in case one of the redo log files is lost due to a disk crash, user deletion, etc.

connect as sysdba
SQL> select * from v$logfile;

GROUP# STATUS TYPE MEMBER
--------- ------- ------ -----------------------------------
1 ONLINE /u01/oradata/L102/redo01_A.log
1 ONLINE /u02/oradata/L102/redo01_B.log

2 ONLINE /u01/oradata/L102/redo02_A.log
2 ONLINE /u02/oradata/L102/redo02_B.log

3 ONLINE /u01/oradata/L102/redo03_A.log
3 ONLINE /u02/oradata/L102/redo03_B.log

At least two redo log groups are required, although it is advisable to have at least three redo log groups when archiving is enabled (see the following chapter). In environments with intensive log switches it is common to see the ARCHiver background process fall behind the LGWR background process. In that case the LGWR process needs to wait for the ARCH process to complete archiving the redo log file.
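As an illustration, an extra member can be added to an existing group as follows (the path is an example):

SQL> alter database add logfile member '/u03/oradata/L102/redo01_C.log' to group 1;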

References:
Note 102995.1 Maintenance of Online Redo Log Groups and Members

4. Archiving

Archiving provides the mechanism needed to back up the changes of the database. The archive files are essential in providing the necessary information to recover the database. It is advisable to run the database in archive log mode, although you may have reasons for not doing so, for example a TEST environment where you accept losing the changes made between the current time and the last backup.
You may skip this chapter when the database does not run in archive log mode.

There are several ways of checking the archive configuration, below is one of them:

connect as sysdba
SQL> archive log list

Database log mode No Archive Mode --OR-- Archive Mode
Automatic archival Disabled --OR-- Enabled
Archive destination --OR-- USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence seq. no
Current log sequence seq. no

Pre-10g, if the database was running in archive log mode but the automatic archiver process was disabled, you were required to manually archive the redolog files.
If this was not done in time, the database froze and any activity was prevented.
Therefore you should enable automatic archiving when the database is running in archive log mode. This can be done by setting the 'log_archive_start' parameter to true in the parameter file.
Starting from 10g, this parameter became obsolete and no longer needs to be set explicitly. It is important that there is enough free space on the dedicated disk(s) for the archive files; otherwise the ARCHiver process can't write and a hang is inevitable.
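When the archive destination is the flash/fast recovery area (10g and above), the following query gives an indication of the space still available:

SQL> select name, space_limit/1024/1024 limit_mb,
            space_used/1024/1024 used_mb
     from v$recovery_file_dest;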

References:
Note 69739.1 How to Turn Archiving ON and OFF
Note 122555.1 Determine how much disk space is needed for the archive files

5. Datafiles

5.1 Autoextend

The autoextend option enables or disables the automatic extension of datafiles. If a datafile is unable to provide the space needed, Oracle can increase the size of the datafile automatically to make room for objects to grow.

A standard Oracle datafile can have, at most, 4194303 Oracle datablocks.
This implies that the maximum size is dependent on the Oracle block size used.

DB_BLOCK_SIZE   Max MB value to use in any command
~~~~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 2048            8191 M
 4096           16383 M
 8192           32767 M
16384           65535 M

Starting from Oracle 10g there is new functionality called BIGFILE, which allows bigger files to be created. Please also consider that every Operating System has its limits; make sure the maximum size of a datafile does not exceed the limit allowed by the Operating System.
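The values in the table above can be verified with a simple calculation: 4194303 blocks multiplied by the block size. For an 8K block size, for example:

SQL> select 4194303 * 8192 / 1024 / 1024 as max_mb from dual;

This returns approximately 32767 MB, matching the table.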

To determine if a datafile and thus, a tablespace, has AUTOEXTEND capabilities:

SQL> select file_id, tablespace_name, bytes, maxbytes, maxblocks, increment_by, file_name
from dba_data_files
where autoextensible = 'YES';

References:
Note 112011.1 ALERT: RESIZE or AUTOEXTEND can "Over-size" Datafiles and Corrupt the Dictionary
Note 262472.1 10g BIGFILE Type Tablespaces Versus SMALLFILE Type

5.2 Location

Verify the location of your datafiles. Over time a database grows and datafiles may be added. Avoid placing datafiles on a 'wherever there is space' basis, as this complicates backup strategies and maintenance.

Below is an example of bad usage:

SQL> select * from v$dbfile;

FILE# NAME
--------- --------------------------------------------------
1 D:\DATABASE\SYS1D806.DBF
2 D:\DATABASE\D806\RBS1D806.DBF
3 D:\DATABASE\D806\TMP1D806.DBF
5 D:\DATABASE\D806\USR1D806.DBF
6 D:\USR2D806.DBF
7 F:\ORACLE\USR3D806.DBF

References:
Note 115424.1 How to Rename or Move Datafiles and Logfiles

6. Tablespaces

6.1 SYSTEM Tablespace

User objects should not be created in the SYSTEM tablespace. Doing so can lead to unnecessary fragmentation and can prevent system tables from growing. The following query returns a list of objects that are created in the SYSTEM tablespace but not owned by SYS or SYSTEM.

SQL> select owner, segment_name, segment_type
from dba_segments
where tablespace_name = 'SYSTEM'
and owner not in ('SYS','SYSTEM');


6.2 SYSAUX Tablespace (10g Release and above)

The SYSAUX tablespace was automatically installed as an auxiliary tablespace to the SYSTEM tablespace when you created or upgraded the database. Some database components that formerly created and used separate tablespaces now occupy the SYSAUX tablespace.

If the SYSAUX tablespace becomes unavailable, core database functionality will remain operational. The database features that use the SYSAUX tablespace could fail, or function with limited capability.

The amount of data stored in this tablespace can be significant and may grow over time to unmanageable sizes if not configured properly. There are a few components that need special attention.

To check which components are occupying space:

SQL> select space_usage_kbytes, occupant_name, occupant_desc
from v$sysaux_occupants
order by 1 desc;

Reference:
Note 329984.1 Usage and Storage Management of SYSAUX tablespace occupants SM/AWR, SM/ADVISOR, SM/OPTSTAT and SM/OTHER

6.3 Locally vs Dictionary Managed Tablespaces

Locally Managed Tablespaces have been available since Oracle 8i, but they became the default starting from Oracle 9i. Locally Managed Tablespaces, also referred to as LMTs, have some advantages over Dictionary Managed tablespaces.

To verify which tablespace is Locally Managed or Dictionary Managed, you can run the following query:

SQL> select tablespace_name, extent_management
from dba_tablespaces;

References:
Note 93771.1 Introduction to Locally-Managed Tablespaces
Note 105120.1 Advantages of Using Locally Managed vs Dictionary Managed Tablespaces

6.4 Temporary Tablespace

* Locally Managed Tablespaces use tempfiles to serve the temporary tablespace, whereas Dictionary Managed Tablespaces use a tablespace of the type temporary. When you are running an older version (pre Oracle 9i), then it is important to check the type of tablespace used to store the temporary segments. By default, all tablespaces are created as PERMANENT, therefore you should make sure that the tablespace dedicated for temporary segments is of the type TEMPORARY.

SQL> select tablespace_name, contents
from dba_tablespaces;

TABLESPACE_NAME CONTENTS
------------------------------ ---------
SYSTEM PERMANENT
USER_DATA PERMANENT
ROLLBACK_DATA PERMANENT
TEMPORARY_DATA TEMPORARY


* Make sure that the users on the database are assigned a tablespace of the type temporary. The following query lists all the users that have a permanent tablespace specified as their default temporary tablespace.

SQL> select u.username, t.tablespace_name
from dba_users u, dba_tablespaces t
where u.temporary_tablespace = t.tablespace_name
and t.contents <> 'TEMPORARY';

Note: Users SYS and SYSTEM will show the SYSTEM tablespace as their default temporary tablespace. This value can be altered as well to prevent fragmentation in the SYSTEM tablespace.

SQL> alter user SYSTEM temporary tablespace TEMP;


* The space allocated in the temporary tablespace is reused. This is done for performance reasons, to avoid the bottleneck of constantly allocating and de-allocating extents and segments. Therefore, when looking at the free space in the temporary tablespace, it may appear full all the time. The following queries can be used to list more meaningful information about temporary segment usage:

This will give the size of the temporary tablespace:

SQL> select tablespace_name, sum(bytes)/1024/1024 mb
from dba_temp_files
group by tablespace_name;

This will give the "high water mark" of that temporary tablespace (= max used at one time):

SQL> select tablespace_name, sum(bytes_cached)/1024/1024 mb
from v$temp_extent_pool
group by tablespace_name;

This will give current usage:

SQL> select ss.tablespace_name,
sum((ss.used_blocks*ts.blocksize))/1024/1024 mb
from gv$sort_segment ss, sys.ts$ ts
where ss.tablespace_name = ts.name
group by ss.tablespace_name;

6.5 Tablespace Fragmentation

Heavily fragmented tablespaces can have an impact on performance, especially when a lot of Full Table Scans occur on the system. Another disadvantage of fragmentation is that you can get out-of-space errors while the total sum of all free space is much more than you requested.

The only way to resolve fragmentation is to recreate the object. As of Oracle8i you can use the 'alter table .. move' command. Prior to Oracle8i you could use export/import.

If you need to defragment your system tablespace, you must rebuild the whole database since it is NOT possible to drop the system tablespace.

References:
Note 1020182.6 - SCRIPT to detect tablespace fragmentation
Note 1012431.6 - Common causes of Fragmentation
Note 147356.1 - How to Move Tables from One Tablespace to Another.

7. Objects

7.1 Number of Extents

While the performance hit on a single over-extended object is not significant, the aggregate effect of many over-extended objects does impact performance. The following query lists all objects that have allocated more extents than a specified minimum. Replace <--minext--> with an actual number; in general, objects allocating more than 100 to 200 extents can be recreated with larger extent sizes:

SQL> select owner, segment_type, segment_name, tablespace_name,
count(blocks), SUM(bytes/1024) "BYTES K", SUM(blocks)
from dba_extents
where owner NOT IN ('SYS','SYSTEM')
group by owner, segment_type, segment_name, tablespace_name
having count(*) > <--minext-->
order by segment_type, segment_name;

7.2 Next extent

It is important that segments can grow and therefore allocate their next extent when needed. If there is not enough free space in the tablespace, the next extent cannot be allocated and the object will fail to grow. The following query returns all segments that are unable to allocate their next extent:

select s.owner, s.segment_name, s.segment_type,
s.tablespace_name, s.next_extent
from dba_segments s
where s.next_extent > (select MAX(f.bytes)
from dba_free_space f
where f.tablespace_name = s.tablespace_name);


Note that if there is a lot of fragmentation in the tablespace, this query may return objects that are still able to grow. The above query is based on the largest free chunk available in the tablespace. If there are a lot of 'small' free chunks adjacent to each other, Oracle will coalesce them to serve the extent allocation.

Therefore it can be interesting to adapt the script in Note 1020182.6 'SCRIPT to detect tablespace fragmentation' to compare the next extent for each object with the 'contiguous' bytes (table space_temp) in the tablespace.

7.3 Indexes

The need to rebuild an index is very rare; often coalescing the index is a better option. Please see the following articles for a full explanation:

Reference:
Note 989093.1: Index Rebuild, the Need vs the Implications
Note 989186.1: Script to investigate a b-tree index structure

8. AUTO vs MANUAL undo

Starting from Oracle 9i a new way of managing before-images was introduced. Previously this was achieved through rollback segments, also referred to as manual undo. Automatic undo is used when the UNDO_MANAGEMENT parameter is set to AUTO. When not set, or set to MANUAL, the 'old' rollback segment mechanism is used. Although both mechanisms are still available in current releases, automatic undo is preferred.

8.1 AUTO UNDO

There is little to no configuration involved with AUM (Automatic Undo Management). You basically define the amount of time the before-image needs to be kept available. This is controlled through the parameter UNDO_RETENTION, defined in seconds, so a value of 900 indicates 15 minutes.

It is important to realize that this value is not honored when we are under space pressure in the undo tablespace.

Therefore the following formula can be used to calculate the optimal undo tablespace size:

Note 262066.1: How To Size UNDO Tablespace For Automatic Undo Management

Starting from Oracle 10g, you may choose to use the GUARANTEE option, to make sure the undo information does not get overwritten before the defined undo_retention time.

Note 311615.1: Oracle 10G new feature - Automatic Undo Retention Tuning
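As a sketch, the retention and guarantee settings can be applied as follows (the undo tablespace name UNDOTBS1 is an example):

SQL> alter system set undo_retention=900;
SQL> alter tablespace UNDOTBS1 retention guarantee;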

8.2 MANUAL UNDO

* Damaged rollback segments can prevent the instance from opening the database. Corrective action can only be taken if the names of the rollback segments are known. Therefore, specify all rollback segments in the 'rollback_segments' parameter in the init.ora.

* Too small or too few rollback segments can have a serious impact on the behavior of your database, so several issues must be taken into account. The following query shows whether there are not enough rollback segments online or whether the rollback segments are too small.

SQL> select d.segment_name, d.tablespace_name, s.waits, s.shrinks,
s.wraps, s.status
from v$rollstat s, dba_rollback_segs d
where s.usn = d.segment_id
order by 1;

SEGMENT_NAME TABLESPACE_NAME WAITS SHRINKS WRAPS STATUS
--------------- ------------------ ----- --------- --------- --------
RB1 ROLLBACK_DATA 1 0 160 ONLINE
RB2 ROLLBACK_DATA 31 1 149 ONLINE
SYSTEM SYSTEM 0 0 0 ONLINE

The WAITS indicates which rollback segment headers had waits for them. Typically you would want to reduce such contention by adding rollback segments.

If SHRINKS is non-zero, then the OPTIMAL parameter is set for that particular rollback segment, or a DBA explicitly issued a shrink on the rollback segment.
The number of shrinks indicates the number of times a rollback segment shrank because a transaction extended it beyond the OPTIMAL size. If this value is too high, then the OPTIMAL size should be increased, as well as the overall size of the rollback segment (the value of minextents can be increased, or the extent size itself; this depends mostly on the indications of the WRAPS column).

The WRAPS column indicates the number of times the rollback segment wrapped to another extent to serve the transaction. If this number is significant, you need to increase the extent size of the rollback segment.

Reference:
Note 62005.1 Creating, Optimizing, and Understanding Rollback Segments

9. Memory Management

This chapter is very version-driven: depending on which version you are running, the options available will differ. Over time Oracle has invested a great deal of effort in managing memory more efficiently and transparently for the end user. Therefore it is advisable to use the automation features as much as possible.

9.1 Pre Oracle 9i

The different memory components (SGA & PGA) needed to be defined at startup of the database. These values were static, so if one of the memory components was too low, the database needed to be restarted to make the changes effective.
How to determine the optimal value for the different memory components is not covered in this note, since that would lead us too far. However, a parameter that was often misused in these versions is sort_area_size.

The 'sort_area_size' parameter in the init.ora defines the amount of memory that can be used for sorting. This value should be chosen carefully, since it is part of the User Global Area (UGA) and is therefore allocated for each user individually.
If there are a lot of concurrent users performing large sort operations on the database, the system can run out of memory.

E.g.: you have a sort_area_size of 1 MB with 200 concurrent users on the database.  Although this memory is allocated dynamically, it can add up to 200 MB and therefore can cause extensive swapping on the system.

9.2 Oracle 9i

Starting from Oracle 9i the following parameters were introduced:

workarea_size_policy = [AUTO | MANUAL]
pga_aggregate_target = <target size>

These allow you to define one pool for PGA memory, which is shared across sessions.
If you frequently receive ORA-4030 errors, this can be an indication that this value is specified too low.

9.3 Oracle 10g

Automatic Shared Memory Management (ASMM) was introduced in 10g. The automatic shared memory
management feature is enabled by setting the SGA_TARGET parameter to a non-zero value.

This feature has the advantage that you can share memory resources among the different components.
Resources will be allocated and deallocated as needed by Oracle automatically.

Automatic PGA Memory management is still available through the 'workarea_size_policy' and
'pga_aggregate_target' parameters.
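A minimal ASMM configuration could look as follows (the sizes are examples and should be adapted to your system):

SQL> alter system set sga_target=1g scope=spfile;
SQL> alter system set pga_aggregate_target=500m scope=spfile;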

9.4 Oracle 11g

Automatic Memory Management (AMM) was introduced in 11g. This enables automatic tuning
of PGA and SGA with the use of two new parameters named MEMORY_MAX_TARGET and MEMORY_TARGET.
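A minimal AMM setup could look as follows; the sizes are examples, and MEMORY_MAX_TARGET requires an instance restart:

SQL> alter system set memory_max_target=2g scope=spfile;
SQL> alter system set memory_target=1536m scope=spfile;
SQL> shutdown immediate
SQL> startup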


Reference:
Note 443746.1 Automatic Memory Management(AMM) on 11g

10. Logging & Tracing

10.1 Alert File

The alert log file of the database is written chronologically. Data is always appended and therefore this file can grow to an enormous size. It should be cleared or truncated on a regular basis, as a large alert file occupies unnecessary disk space and can slow down OS write performance to the file.


Pre-11g:

SQL> show parameter background_dump_dest

NAME TYPE VALUE
------------------------------ ------- ----------------------------------
background_dump_dest string D:\Oradata\Admin\PROD\Trace\BDump

11g and above:

SQL> show parameter diagnostic_dest

NAME TYPE VALUE
------------------------------ ------- ----------------------------------
diagnostic_dest string /oracle/admin/L111
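Because Oracle simply appends to the alert log, it can be cleared at OS level while the database stays up. The directory and file name below are placeholders for your own environment:

$ cd <background_dump_dest>
$ cat /dev/null > alert_<SID>.log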


10.2 Max_dump_file_size

Oracle Server processes generate trace files for certain errors or conflicts. These trace files are useful for further analysis of the problem. The init.ora parameter 'max_dump_file_size' limits the size of these trace files. The value of this parameter should be specified in Operating System blocks.
Make sure the disk space can handle the maximum size specified; if not, this value should be changed.

SQL> show parameter max_dump_file_size

NAME TYPE VALUE
---------------------------------- ------- ---------------------
max_dump_file_size integer 10240
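Assuming a 512-byte Operating System block size, the value shown above corresponds to 5 MB:

SQL> select 10240 * 512 / 1024 / 1024 as max_trace_mb from dual;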


10.3 User and core dump size parameters

The directories pointed to by the 'user_dump_dest' and 'core_dump_dest' parameters can contain a lot of trace information.
It is important to clear these directories at regular intervals, as they can take up a significant amount of space.

Note: starting from Oracle 11g, this location is controlled by the 'diagnostic_dest' parameter

Reference:
Note 564989.1 How To Truncate a Background Trace File Without Bouncing the Database

10.4 Audit files

By default, every connection as SYS or SYSDBA is logged in an operating system file.
The location is controlled through the parameter 'audit_file_dest'. If this parameter is not set then the location defaults to $ORACLE_HOME/rdbms/audit.
Over time this directory may contain a lot of auditing information and can take up a significant amount of space.

11. Advanced Health Checking

The previous chapters outlined the basic items to check to prevent common database caveats. In this section you will find references to several articles explaining how a more in-depth analysis and monitoring can be achieved. These articles mainly focus on data dictionary integrity and database structure verification.

Tuesday, November 27, 2012

Oracle Database Health check scripts



The health of the database can be checked in various ways, including:

Monitoring Scope / Current Status

OS Level:
1  Physical memory / Load: Load normal, load averages: 0.35, 0.37, 0.36
2  OS space threshold (archive, ora_dump etc.): Sufficient space available
3  Top 10 processes consuming memory: No process using exceptionally high memory
4  Free volumes available: Sufficient disk space is available on the mount points
5  Filesystem space: Under normal threshold

Database Level:
6  Check extents / proactive space addition: Space is being regularly added
7  Check alert log for ORA- and warning messages
8  Major wait events (latch/enqueue/lib cache pin): No major wait events
9  Max sessions
10 Long running jobs: 6 inactive sessions running for more than 8 hrs
11 Invalid objects: 185
12 Analyze jobs (once a week): Done on 20-JAN-2008 Time 06:00:06
13 Temp usage / rollback segment usage: Normal
14 Nologging indexes
15 Hot backup / cold backup: Gone fine
16 Redo generation: Normal
17 PQ processes: Normal
18 I/O generation: Under normal threshold
19 2PC pending transactions: 0

DR / Backup:
1  Sync arch: Normal
2  Purge arch: Normal
3  Recovery status: Normal

20 DATABASE HEALTH CHECK SCRIPT: Showing locks and archive generation details

In Detail DATABASE Health check:
OPERATING SYSTEM:

1) Physical memory / Load:

1) free: the free command displays the amount of total, free and used physical memory (RAM) in the system, as well as information on shared memory, buffers, cached memory and swap space used by the Linux kernel.
Usage:
$ free -m

2) vmstat: vmstat reports virtual memory statistics, with information about processes, swap, free, buffer and cache memory, paging space, disk I/O activity, traps, interrupts, context switches and CPU activity.
Usage:
$ vmstat 5

3) top: the top command displays a dynamic real-time view of the running tasks managed by the kernel in a Linux system. The memory usage stats from top include live total, used and free physical memory and swap memory, with their buffer and cached memory sizes respectively.
Usage:
$ top

4) ps: the ps command reports a snapshot of the current active processes. ps shows the percentage of memory used by each process or task running in the system. With this command, top memory-hogging processes can be identified.
Usage:
$ ps aux

2) OS space threshold (archive, ora_dump etc.):
Check that OS space is available in all filesystems, especially the locations holding archive logs and Oracle database files. We can use the below OS commands:
$ df -h
$ du -csh *
3) Top 10 processes consuming memory:
We can display the top 10 memory-consuming processes (%MEM is the fourth column of ps aux) as follows:

$ ps aux | sort -nrk 4,4 | head -10

We can also use the top command and press M, which orders the process list by memory usage.

4) Free volumes available:

Make sure sufficient disk space is available on the mount points of each OS server where the database is up and running.

$ df -h

5) Filesystem space:

Check the filesystems on the OS side to confirm that sufficient space is available at all mount points (under the normal threshold).

DATABASE :

6) Check extents / proactive space addition:
Check each of the data, index and temporary tablespaces for extent and block allocation details.

SET LINES 1000
SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS;

SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS WHERE TABLESPACE_NAME='STAR01D';

7) Check alert log for ORA- and warning messages:

Checking the alert log file regularly is a vital task. In the alert log files we have to look for the following things:

1) Look for any Oracle-related errors.
Open the alert log file with the less or more command and search for ORA-.
This will give you the error details and time of occurrence.

2) Look for database-level or tablespace-level changes.
Monitor the alert log file and search it for each day's activities happening in the database, such as bouncing of the database, increases in the size of tablespaces, or changes to database parameters. In an 11g database we can also look for TNS errors in the alert log file.

8) Major wait events (latch/enqueue/Lib cache pin):

We can check the wait events details with the help of below queries:

SELECT s.saddr, s.SID, s.serial#, s.audsid, s.paddr, s.user#, s.username,
s.command, s.ownerid, s.taddr, s.lockwait, s.status, s.server,
s.schema#, s.schemaname, s.osuser, s.process, s.machine, s.terminal,
UPPER (s.program) program, s.TYPE, s.sql_address, s.sql_hash_value,
s.sql_id, s.sql_child_number, s.sql_exec_start, s.sql_exec_id,
s.prev_sql_addr, s.prev_hash_value, s.prev_sql_id,
s.prev_child_number, s.prev_exec_start, s.prev_exec_id,
s.plsql_entry_object_id, s.plsql_entry_subprogram_id,
s.plsql_object_id, s.plsql_subprogram_id, s.module, s.module_hash,
s.action, s.action_hash, s.client_info, s.fixed_table_sequence,
s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#,
s.row_wait_row#, s.logon_time, s.last_call_et, s.pdml_enabled,
s.failover_type, s.failover_method, s.failed_over,
s.resource_consumer_group, s.pdml_status, s.pddl_status, s.pq_status,
s.current_queue_duration, s.client_identifier,
s.blocking_session_status, s.blocking_instance, s.blocking_session,
s.seq#, s.event#, s.event, s.p1text, s.p1, s.p1raw, s.p2text, s.p2,
s.p2raw, s.p3text, s.p3, s.p3raw, s.wait_class_id, s.wait_class#,
s.wait_class, s.wait_time, s.seconds_in_wait, s.state,
s.wait_time_micro, s.time_remaining_micro,
s.time_since_last_wait_micro, s.service_name, s.sql_trace,
s.sql_trace_waits, s.sql_trace_binds, s.sql_trace_plan_stats,
s.session_edition_id, s.creator_addr, s.creator_serial#
FROM v$session s
WHERE ( (s.username IS NOT NULL)
AND (NVL (s.osuser, 'x') <> 'SYSTEM')
AND (s.TYPE <> 'BACKGROUND') AND STATUS='ACTIVE'
)
ORDER BY "PROGRAM";

The following query provides clues about whether Oracle has been waiting for library cache activities:

Select sid, event, p1raw, seconds_in_wait, wait_time
From v$session_wait
Where event = 'library cache pin'
And state = 'WAITING';

The below Query gives details of Users sessions wait time and state:

SELECT NVL (s.username, '(oracle)') AS username, s.SID, s.serial#, sw.event,
sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.SID = sw.SID
ORDER BY sw.seconds_in_wait DESC;

9) Max Sessions:
There should not be more than 6 inactive sessions running for more than 8 hours in a Database in order to minimize the consumption of CPU and I/O resources.
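A sketch of a query to spot such sessions, based on v$session (the 8-hour threshold matches the rule above):

select sid, serial#, username, status,
round(last_call_et/3600,1) hours_inactive
from v$session
where status='INACTIVE'
and username is not null
and last_call_et > 8*3600
order by last_call_et desc;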

a)Users and Sessions CPU consumption can be obtained by below query:

Set lines 1000
select ss.username, se.SID,VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID and ss.status='ACTIVE'
and ss.username is not null
order by VALUE desc;


b) Users and Sessions CPU and I/O consumption can be obtained by below query:

-- shows Day wise,User wise,Process id of server wise- CPU and I/O consumption
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid -- statistic# 12 is 'CPU used by this session' on many versions; verify in v$statname
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;

10) Long running Jobs:

Long-running operations can be found with the query below (v$session_longops reports operations running longer than 6 seconds):

col username for a20
col message for a50
col remaining for 9999
select username,to_char(start_time, 'hh24:mi:ss dd/mm/yy') started,
time_remaining remaining, message
from v$session_longops
where time_remaining > 0
order by time_remaining desc;

11) Invalid objects:

We can check the invalid objects with the help of the below query:

select owner||' '||object_name||' '||created||' '||status from dba_objects where status='INVALID';
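After identifying invalid objects, they can usually be recompiled; a sketch using the standard utlrp.sql script and the UTL_RECOMP package (the schema name 'SCOTT' is a placeholder):

```sql
-- Recompile all invalid objects in dependency order
SQL> @?/rdbms/admin/utlrp.sql

-- Or recompile a single schema (serial recompilation)
SQL> EXEC UTL_RECOMP.RECOMP_SERIAL('SCOTT');
```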

12) Analyze Jobs (once a week):

As a golden rule, job status should be reviewed at least once a week.
The steps below can be used when analyzing jobs.

Analyzing a Running Job
The status of a job or a task changes several times during its life cycle. A job can have the following as its status:
Scheduled: The job is created and will run at the specified time.
Running: The job is being executed and is in progress.
Initialization Error: The job or step could not be run successfully. If a step in a job fails initialization, the job status is Initialization Error.
Failed: The job was executed but failed.
Succeeded: The job was executed completely.
Stopped: The user canceled the job.
Stop Pending: The user has stopped the job. The already running steps are completing execution.
Suspended: This indicates that the execution of the job is deferred.
Inactive: This status indicates that the target has been deleted.
Reassigned: The owner of the job has changed.
Skipped: The job was not executed at the specified time and has been omitted.
Currently running jobs can be found with the query below:

select sid, job,instance from dba_jobs_running;

Failed and broken job details can be found with the query below:

select job||' '||schema_user||' '||Broken||' '||failures||' '||what||' '||last_date||' '||last_sec from dba_jobs;

13) Temp usage / Rollback segment/PGA usage:

Temporary tablespace usage details can be obtained with the query below:
Set lines 1000
SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
a.sid||','||a.serial# SID_SERIAL,
a.username,
a.program
FROM sys.v_$session a,
sys.v_$sort_usage b,
sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

Undo tablespace usage details can be obtained with the query below:
set lines 1000
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,
s.program,
r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r,
sys.v_$session s,
sys.v_$transaction t,
sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';

PGA usage details can be obtained with the query below:
select st.sid "SID", sn.name "TYPE",
ceil(st.value / 1024 / 1024/1024) "GB"
from v$sesstat st, v$statname sn
where st.statistic# = sn.statistic#
and sid in
(select sid from v$session where username like UPPER('&user'))
and upper(sn.name) like '%PGA%'
order by st.sid, st.value desc;
Enter value for user: STARTXNAPP
14) Validating the Backup:

Verify that the hot backup/cold backup (or any physical or logical backup) of all production and non-production databases completed successfully. Make sure you have valid backups of all databases, and check the backup locations to confirm that each backup completed on time with the data required for restore and recovery.
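If the backups are taken with RMAN, their restorability can be checked without actually restoring anything; a minimal sketch:

```sql
-- Reads the backup pieces and verifies they are usable for restore
RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;

-- Quick overview of which backups exist and when they completed
RMAN> LIST BACKUP SUMMARY;
```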

15) Redo generation/Archive log generation details:
Make sure log switches are not happening too frequently. Frequent log switches generate more archive logs, which can degrade database performance; on a production database, depending on the server configuration and workload, somewhere between 5 and 20 switches is typical.

We can get the log switch details with the query below:

Redolog switch Datewise and hourwise:
-------------------------------
set lines 120;
set pages 999;
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
Archive log generation is directly proportional to the number of log switches in the database: frequent log switches produce more archive logs, which can affect database performance.


The queries below give archive log generation details:

a)Archive logs by dates:
set lines 1000
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
b)Archive log generation details Day-wise :

select to_char(COMPLETION_TIME,'DD-MON-YYYY'),count(*)
from v$archived_log group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');

c) Archive log count of the day:

select count(*)
from v$archived_log
where trunc(completion_time)=trunc(sysdate);

count of archived logs generated today on hourly basis:
-------------------------------------------------------
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
where to_char(first_time,'DD-MON-RR')='16-AUG-10'
group by to_char(first_time,'DD-MON-RR')
order by 1
/

16)I/O Generation:
CPU and I/O consumption details for all users in the database can be found with the query below:
-- Show IO per session,CPU in seconds, sessionIOS.
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid -- statistic# 12 is 'CPU used by this session' on many versions; verify in v$statname
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
To know what the session is doing and what kind of sql it is using:

-- what kind of sql a session is using
set lines 9999
set pages 9999

select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

eg: sid=1853

17) Sync arch:
In a Data Guard environment we have to check that the primary is in sync with the standby database. The V$MANAGED_STANDBY view on the standby database site shows the activities performed by both the redo transport and Redo Apply processes:
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, you will need to perform gap recovery manually if you are using logical standby databases and the primary database is not available.
The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.
On a physical standby database
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;

If the query returns no rows, the primary database is in sync with the standby database. If it returns rows, the listed archive logs have to be applied manually.

After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
Eg:
SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;
Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_8.arc';

After you register these log files on the physical standby database, you can restart Redo Apply. The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

On a logical standby database:
To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:
SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
2> WHERE NEXT_CHANGE# NOT IN
3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
4> ORDER BY THREAD#,SEQUENCE#;

THREAD# SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
1 6 /disk1/oracle/dbs/log-1292880008_6.arc
1 10 /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';

After you register these log files on the logical standby database, you can restart SQL Apply.

The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
Monitoring Log File Archival Information:
Step 1 Determine the current archived redo log file sequence numbers.
Enter the following query on the primary database to determine the current archived redo log file sequence numbers:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG
WHERE STATUS='CURRENT';
Step 2 Determine the most recent archived redo log file.
Enter the following query at the primary database to determine which archived redo log file contains the most recently transmitted redo data:
SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
Step 3 Determine the most recent archived redo log file at each destination.
Enter the following query at the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
2> FROM V$ARCHIVE_DEST_STATUS
3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID 1 947
standby1 VALID 1 947
The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.
Step 4 Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.
Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:
SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
3> LOCAL WHERE
4> LOCAL.SEQUENCE# NOT IN
5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
6> THREAD# = LOCAL.THREAD#);

THREAD# SEQUENCE#
--------- ---------
1 12
1 13
1 14

18)Purge arch:
We have to make sure the archive logs files are purged safely or move to Tape drive or any other location in order to make space for new archive logs files in the Archive logs destination locations.
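When RMAN manages the archive destination, old logs can be purged safely once they are backed up; a sketch (the 7-day disk retention is an assumption to adjust to site policy):

```sql
-- Remove archived logs older than 7 days that RMAN has cataloged
RMAN> DELETE NOPROMPT ARCHIVELOG ALL
        COMPLETED BEFORE 'SYSDATE-7';

-- Consider adding BACKED UP 1 TIMES TO DEVICE TYPE SBT
-- to delete only logs that are already on tape
```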

19)Recovery status:
In order to do recover make sure you are having latest archive logs,so that you can restore and do the recovery if required.
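With RMAN, the "can I recover?" question can be answered directly; RESTORE ... PREVIEW lists the backups and archived logs a restore would need, without restoring anything:

```sql
-- Shows which backup sets and archived logs a restore would require
RMAN> RESTORE DATABASE PREVIEW SUMMARY;
```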


20) MY DATABASE HEALTH CHECK SCRIPT:
/* SCRIPT FOR MONITORING AND CHECKING HEALTH OF DATABASE-USEFUL FOR PRODUCTION DATABASES */

-- SHOWS RUNNING JOBS
select 'RUNNING JOBS', sid, job,instance from dba_jobs_running;
set lines 1000
-- SHOWS ARCHIVE LOGS GENERAION DETAILS HOURLY AND DATE WISE BASIS
select 'ARCHIVE LOG REPORT',to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
-- WHAT ALL THE SESSIONS ARE GETTING BLOCKED
select 'SESSIONS BLOCKED',process,sid, blocking_session from v$session where blocking_session is not null;
-- WHICH SESSION IS BLOCKING WHICH SESSION
set lines 9999
set pages 9999
select s1.username || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2 ;
-- SHOWS BLOCK CHANGES DETAILS AND PHYSICAL READS DETAIL
select a.sid,b.username,block_gets,consistent_gets,physical_reads,block_changes
from V$SESS_IO a,V$SESSION b
where a.sid=b.sid and block_changes > 10000 order by block_changes desc;
-- show IO per session / CPU in seconds. sessionIOS.sql
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid -- statistic# 12 is 'CPU used by this session' on many versions; verify in v$statname
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
-- SCRIPT TO IDENTIFY LONG RUNNING STATEMENTS
rem LONGOPS.SQL
rem Long Running Statements
rem Helmut Pfau, Oracle Deutschland GmbH
set linesize 120
col opname format a20
col target format a15
col units format a10
col time_remaining format 99990 heading Remaining[s]
col bps format 9990.99 heading [Units/s]
col fertig format 90.99 heading "complete[%]"
select sid,
opname,
target,
sofar,
totalwork,
units,
(totalwork-sofar)/time_remaining bps,
time_remaining,
sofar/totalwork*100 fertig
from v$session_longops
where time_remaining > 0
/
-- ACTIVE SESSIONS IN DATABASE
select 'ACTIVE SESSION', sid, serial#,machine, osuser,username,status from v$session where username is not null and status='ACTIVE';
-- WHAT SQL A SESSION IS USING
set lines 9999
set pages 9999
select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

eg:SID=1844
I would like to add one more script which reports the database size (total, used, and available), tablespace usage details, and the hit ratios of various SGA components, which can be very helpful for monitoring the performance of your databases.

Database_monitor.sql:

ttitle "1. :============== Tablespace Usage Information ==================:" skip 2
set linesize 140
col Total format 99999.99 heading "Total space(MB)"
col Used format 99999.99 heading "Used space(MB)"
col Free format 99999.99 heading "Free space(MB)"
break on report
compute sum of Total on report
compute sum of Used on report
compute sum of Free on report
select a.tablespace_name, round(a.bytes/1024/1024,2) Total,
round( nvl( b.bytes,0)/1024/1024,2) Used,
round(nvl(c.bytes, 0)/1024/1024,2) Free ,
round(nvl(b.bytes,0)*100/nvl(a.bytes,0),2) "% Used"
from sys.sm$ts_avail a, sys.sm$ts_used b, sys.sm$ts_free c
where a.tablespace_name=b.tablespace_name(+)
and b.tablespace_name=c.tablespace_name(+);

ttitle "2. :============== Hit Ratio Information ==================:" skip 2
set linesize 80
clear columns
clear breaks
set pagesize 60 heading off termout off echo off verify off
REM
col val1 new_val lib noprint
select 100*(1-(SUM(Reloads)/SUM(Pins))) val1
from V$LIBRARYCACHE;

ttitle off
col val2 new_val dict noprint
select 100*(1-(SUM(Getmisses)/SUM(Gets))) val2
from V$ROWCACHE;

ttitle off
col val3 new_val phys_reads noprint
select Value val3
from V$SYSSTAT
where Name = 'physical reads';

ttitle off
col val4 new_val log1_reads noprint
select Value val4
from V$SYSSTAT
where Name = 'db block gets';

ttitle off
col val5 new_val log2_reads noprint
select Value val5
from V$SYSSTAT
where Name = 'consistent gets';

ttitle off
col val6 new_val chr noprint
select 100*(1-(&phys_reads / (&log1_reads + &log2_reads))) val6
from DUAL;

ttitle off
col val7 new_val avg_users_cursor noprint
col val8 new_val avg_stmts_exe noprint
select SUM(Users_Opening)/COUNT(*) val7,
SUM(Executions)/COUNT(*) val8
from V$SQLAREA;
ttitle off
set termout on
set heading off
ttitle -
center 'SGA Cache Hit Ratios' skip 2

select 'Data Block Buffer Hit Ratio : '||&chr db_hit_ratio,
' Shared SQL Pool ',
' Dictionary Hit Ratio : '||&dict dict_hit,
' Shared SQL Buffers (Library Cache) ',
' Cache Hit Ratio : '||&lib lib_hit,
' Avg. Users/Stmt : '||
&avg_users_cursor||' ',
' Avg. Executes/Stmt : '||
&avg_stmts_exe||' '
from DUAL;

ttitle "3. :============== Sort Information ==================:" skip 2

select A.Value Disk_Sorts,
B.Value Memory_Sorts,
ROUND(100*A.Value/
DECODE((A.Value+B.Value),0,1,(A.Value+B.Value)),2)
Pct_Disk_Sorts
from V$SYSSTAT A, V$SYSSTAT B
where A.Name = 'sorts (disk)'
and B.Name = 'sorts (memory)';

ttitle "4. :============== Database Size Information ==================:" skip 2


select 'Avail (GB)' metric, sum(bytes)/1024/1024/1024 gb from sm$ts_avail
union all
select 'Used (GB)', sum(bytes)/1024/1024/1024 from sm$ts_used
union all
select 'Free (GB)', sum(bytes)/1024/1024/1024 from sm$ts_free;


Hope this helps you in monitoring your Databases.

How to Perform a Healthcheck on the E-biz Suite




A complete Oracle health check certifies every aspect of your Oracle database including disk, network, server environment and over 200 individual Oracle health metrics.


Check the Web Server is up: in a browser open http://hostname.domainname:port, where hostname.domainname is the machine on which the Web Server (OHS) is running. If a load balancer or proxy server sits in front of the Web Server, use the load balancer/proxy server name instead. The port is the one the Web Server listens on, defined in $IAS_ORACLE_HOME/Apache/Apache/conf/httpd.conf. If the page is not accessible, check the error_log at $IAS_ORACLE_HOME/Apache/Apache/logs.

Check Jserv is up: check whether you can access the OAM (Oracle Applications Manager) page: http://hostname.domainname:port_number/servlets/weboam/oam/oamLogin. If you cannot access the OAM login screen, check the logs.

Check mod_plsql is working (11i only): http://hostname.domainname:port/pls/$SID. So if you installed a Vision instance with SID VIS11I, your SID here is VIS11I (in some documents this is also called TWO_TASK). You should see output from the procedure FND_WEB.PING. If not, check that:
A) the database listener is up
B) the GUEST user is not end-dated and its password is correct
C) the dbc (database connect descriptor) file at $FND_TOP/secure is correct
D) the apps password is correct in wdbsvr.app at $IAS_ORACLE_HOME/Apache/modplsql/cfg

Check Aoljtest for IAS Availability :
Verify the Application Tier Node Configurations using aoljtest.jsp

(http://[hostname.domainname]:[port]/OA_HTML/jsp/fnd/aoljtest.jsp)


Login to the application: log in with an application user to confirm login works; if not, check mod_plsql for issues.

Check whether the Apache service is up using
$IAS_ORACLE_HOME/Apache/Apache/logs/httpd_pls.pid

This file contains the PID of the HTTP process handling the PL/SQL requests.


Check the Forms Server is working: click any core responsibility such as “System Administrator”. If you face problems opening the Forms applet:
A) Check JInitiator is installed; if not, check whether you have access to install it.
B) Try clearing the cache from JInitiator.
C) Check the forms server parameters are configured properly in appsweb_$CONTEXT_NAME.cfg.
D) Check whether the forms server listener is up: netstat -an | grep form_server_port
Check the Concurrent Manager is up and running: select “Administer Concurrent Manager” and check whether “Actual” and “Target” are the same. If not, check $SID_MM_DD.mgr at $APPLCSF/$APPLLOG.

Check reports display properly: change the “Active Users” report output from “Text file” to “PDF” in the Define Concurrent Request screen. Submit an Active Users request, click Output, and check that you can see the report in PDF format. If you encounter an error such as REP-3000 Oracle Toolkit, the issue is with DISPLAY.

Check the Apps Listener is up: the Apps listener is different from the database listener; one of its uses is serving the output and log files of concurrent requests. Check the listener by clicking the log file and output file for the Active Users request submitted in the previous step. If you encounter issues accessing the log and output files, the problem may be the Apps listener. Check whether adalnctl.sh (where ad is apps dba, aln is apps listener, and ctl is control) on the application tier started successfully. If yes, then check this query: select logfile_name, logfile_node_name, outfile_name, outfile_node_name from fnd_concurrent_requests where request_id=[your request id number];

Check if Workflow is up: log in to OAM (Oracle Applications Manager). After login, at the top right of the page, select “Workflow Manager” from the “Navigate to” drop-down menu and click GO. Check that “Notification Mailers” is up.

Check if Discoverer 4i is up: there are two types of Discoverer 4i:
Discoverer Viewer: check http://yourhostname:port/discoverer4i/viewer
Discoverer Plus: check http://yourhostname:port/discwb4/html/english/welcome.htm

Custom applications: any custom application configured with Oracle Apps R12, OEM, or other monitoring software. We should have complete details of each and every customization.

Oracle Applications has a health check tool called RDA (see Oracle Metalink note 250262.1).

General Health Checks:
· Invalid Objects

· General DB

· User Load Checks

· Application Users

· Profiles

· Module Specific


R12 Health Checks: Two Frequencies

Frequencies:

1) Daily: Includes both Warnings and General Checks

Will cover these checks in more detail


2) Weekly:

Objects created in the last month (DBAs should usually know why objects were created; if not, research them)

Count users whose passwords will expire in the next several weeks (look for bubbles of high password expiration)

++++++++++++++++++++++++++++++++++++++++++++++++
++ Warning Section:

++++++++++++++++++++++++++++++++++++++++++++++++

Confirm not in Maintenance Mode

User Security AND Access Issues section:

Show any logins whose password is NOT set to expire in 90 days

Non-DBAs who have the System Administrator or Application Developer responsibility

Terminated employees who still have an active login to Oracle Apps

USER_NAME TERM_DATE

——————– ——————–

Zhann 06-Jan-09

Confirm the database is not in Maintenance Mode
Profile option = 'Applications Maintenance Mode'

When the profile is set to 'NORMAL', the system is not in Maintenance Mode and the application should function normally.

If you start Forms and Apache while in Maintenance Mode, users have access problems even though the startup looks successful.

We added this profile check to our application start shell script: we get a warning and the application does not start, which makes it obvious when the DBAs forgot to turn Maintenance Mode off.
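The Maintenance Mode check can be scripted with the FND_PROFILE API; the internal profile name CONC_MAINT_MODE for 'Applications Maintenance Mode' is an assumption to verify at your site:

```sql
-- Returns 'NORMAL' when the system is not in Maintenance Mode
SELECT fnd_profile.value('CONC_MAINT_MODE') AS maint_mode
  FROM dual;
```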


Non-DBAs/Sys Admins who have System Administrator responsibility

USER_NAME  END_DATE  TODAY      RESPONSIBILITY
---------  --------  ---------  --------------------
Ajay                 14-MAR-09  System Administrator


Terminated employees with active FND login account

USER_NAME  TERM_DATE
---------  ---------
AMIT       06-Jan-09



Profile Warnings
Confirm Examine function is password protected

Profile = ‘Utilities: Diagnostics’

When set to 'Y', Help->Diagnostics->Examine is allowed without requiring the APPS password; this profile also protects other Help->Diagnostics actions.

The logic is cryptic, so it is helpful to explain the meaning of the setting in the SQL script.
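A minimal SQL sketch of this check; the internal name DIAGNOSTICS for the 'Utilities: Diagnostics' profile is an assumption to verify at your site:

```sql
-- 'N' means Help->Diagnostics->Examine requires the APPS password
SELECT fnd_profile.value('DIAGNOSTICS') AS diagnostics
  FROM dual;
```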


Custom Direct Changes Warnings

Standard objects you customized (not ‘extensions’)

Keep these to an absolute minimum – avoid them like the plague.

You lose your changes when Oracle patches or upgrades the object.

A health check that looks for recently changed objects alerts you that you may have lost your change.

Examples

Views customized for better performance or logic (e.g. ADP Views)

Package changes: iExpense Approval – an emergency customization to allow the CEO to enter an Expense Report

OWNER  OBJECT_NAME           CREATED    LAST_DDL
-----  --------------------  ---------  ---------
APPS   AP_WEB_DB_HR_INT_PKG  08-OCT-04  13-JAN-19
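The "did my customized object change?" check can be sketched with DBA_OBJECTS.LAST_DDL_TIME; the object name below is the example from the report above, and the one-week window is an assumption:

```sql
-- Alert if a customized object was patched/recompiled recently
SELECT owner, object_name, created, last_ddl_time
  FROM dba_objects
 WHERE owner = 'APPS'
   AND object_name = 'AP_WEB_DB_HR_INT_PKG'
   AND last_ddl_time > SYSDATE - 7;
```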



Module Specific Warnings
Receivables (AR)

Orphaned rows in ar_journal_interim_all table

Caused by AR ‘Journal Entries Report’ ending in Error

Check with Support on how to correct


Projects (PA)

Orphaned rows in pa_unbilled_rec_reporting table

Caused by ‘MGT: Unbilled Receivables Aging’ ending in Error

Check with Support on how to correct

General DB Warnings
Locked db user accounts

We have set up some DB accounts to lock after 3 invalid password tries. We only have invalid-password logic on custom schemas.

Expansion in Undo Tablespace

This can go unnoticed and take up a lot more disk space.

Tablespace    Total MB  Used MB  Avail MB  Percent Free
------------  --------  -------  --------  ------------
APPS_UNDOTS1  3,518.00   389.19  3,128.81         88.94
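The tablespace figures above can be produced with a query like the following sketch over DBA_DATA_FILES and DBA_FREE_SPACE (tablespace name from the example report):

```sql
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes) / 1048576, 2)                    AS total_mb,
       ROUND((SUM(d.bytes) - NVL(f.free, 0)) / 1048576, 2) AS used_mb,
       ROUND(NVL(f.free, 0) / 1048576, 2)                  AS avail_mb,
       ROUND(NVL(f.free, 0) / SUM(d.bytes) * 100, 2)       AS pct_free
  FROM dba_data_files d,
       (SELECT tablespace_name, SUM(bytes) AS free
          FROM dba_free_space
         GROUP BY tablespace_name) f
 WHERE d.tablespace_name = f.tablespace_name (+)
   AND d.tablespace_name = 'APPS_UNDOTS1'
 GROUP BY d.tablespace_name, f.free;
```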



General Health Checks
Invalid Objects:

Apps (we currently have 4 objects that are always invalid)

We do not expect more than our 'normal' invalids (compile Apps if more appear)

Custom schemas (we normally have zero invalids)


OWNER  OBJECT_NAME                   OBJECT_TYPE  CREATED
-----  ----------------------------  -----------  ---------
APPS   MRP_AN_ONHAND_SUPPLIES_V      VIEW         23-AUG-01
APPS   MRP_AP_ONHAND_SUPPLIES_V      VIEW         23-AUG-01
APPS   MRP_AP_OPERATION_RESOURCES_V  VIEW         23-AUG-01
APPS   MRP_SN_OPR_RESS_T2            TRIGGER      23-AUG-01
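Listings like the one above come straight from DBA_OBJECTS; a sketch:

```sql
SELECT owner, object_name, object_type, created
  FROM dba_objects
 WHERE status = 'INVALID'
 ORDER BY owner, object_name;
```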



General RDBMS Checks
Show count of failed DB logins (this does not include failed R12 logins)

Note: failed logins are popular with auditors, so take care of this.
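If session auditing is enabled (AUDIT SESSION), failed database logins appear in DBA_AUDIT_SESSION with return code 1017 (invalid username/password); a sketch, assuming auditing is on at your site:

```sql
SELECT username, COUNT(*) AS failed_logins
  FROM dba_audit_session
 WHERE returncode = 1017
   AND timestamp > SYSDATE - 7
 GROUP BY username
 ORDER BY failed_logins DESC;
```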


List expired Schema accounts

USERNAME    CREATED    DEFAULT TS  TEMP  STATUS  LOCK_DATE
----------  ---------  ----------  ----  ------  --------------
AD_MONITOR  12-NOV-05  TOOLS       TEMP  EXPIRE  11/12/05 12:34
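The listing above maps to DBA_USERS; a sketch that also catches locked accounts:

```sql
SELECT username, created, default_tablespace,
       temporary_tablespace, account_status, lock_date
  FROM dba_users
 WHERE account_status <> 'OPEN'
 ORDER BY username;
```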



User Load Checks:
Current Forms user count

Self-Service user count (in the last hour, since there is no logoff time recorded)

High Water Mark for Oracle Connections since last DB start

Current Total sessions

JDBC Thin Client connections – sometimes not released


TIME OF DAY      Forms connects  Self Service  Current Sess  HighWater  JDBC Thin
---------------  --------------  ------------  ------------  ---------  ---------
10-JAN-06 05:45             229           132           722        994        359
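The session counts can be sketched from V$SESSION and V$LICENSE; how you distinguish Forms sessions from JDBC Thin sessions depends on the PROGRAM values at your site, so the LIKE filters below are assumptions:

```sql
SELECT (SELECT COUNT(*) FROM v$session
         WHERE UPPER(program) LIKE '%FRM%')        AS forms_connects,
       (SELECT COUNT(*) FROM v$session
         WHERE program LIKE '%JDBC Thin Client%')  AS jdbc_thin,
       (SELECT COUNT(*) FROM v$session)            AS current_sess,
       (SELECT sessions_highwater FROM v$license)  AS highwater
  FROM dual;
```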


User Checks
Users whose FND Login Account password has become INVALID (11.5.10.2)

Caused by too many failed login attempts


USER_NAME       LASTLOGON  PASS_DATE  ENCRYPTED_PASSWORD
--------------  ---------  ---------  ------------------
JAMESADAMS      20-May-05  04-Apr-05  INVALID
JOHNNYGABRON    04-Jan-06  06-Jan-06  INVALID
LYNNEPAULUS     30-Dec-05  06-Jan-06  INVALID
ROBERTBOLTON    22-Dec-05  06-Jan-06  INVALID
ROBERTKRENKE    22-Dec-05  23-Sep-05  INVALID
RONPERALTA      08-Jan-06  22-Dec-05  INVALID
SKALSTABAKTON   22-Dec-05  06-Jan-06  INVALID
TRACEYMARSHALL  20-Dec-05  07-Oct-05  INVALID
USMANCHADRY     22-Dec-05  22-Dec-05  INVALID
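In 11.5.10.x the lockout shows up as the literal string INVALID in FND_USER.ENCRYPTED_USER_PASSWORD; a sketch, to verify against your release:

```sql
SELECT user_name, last_logon_date, password_date
  FROM fnd_user
 WHERE encrypted_user_password = 'INVALID'
 ORDER BY user_name;
```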



Profile General Checks: we should check site-level profiles changed in the last 30 days (and keep complete details and history of each profile change).

USER_PROF_NAME                            VALUE        USER_NAME   UPDATE_DATE
----------------------------------------  -----------  ----------  -----------
Applications Maintenance Mode             NORMAL       ANONYMOUS   08-Jan-2006
ICX: Discoverer End User Layer Language   US           LYNNEPAULU  07-Jan-2006
PA: Percentage Of Resource's Capacity     0            NATEBRUNNE  05-Jan-2006
PA: Resource Utilization Calculation Met  CAPACITY     NATEBRUNNE  06-Jan-2006
PA: Resource Utilization Period Type      GL           NATEBRUNNE  06-Jan-2006
PA: Starting Assignment Status            104          NATEBRUNNE  11-Jan-2006
PA: Utilization Calculation Begin Date    01-DEC-2005  NATEBRUNNE  06-Jan-2006
PA: Utilization Records Per Set           1000         NATEBRUNNE  06-Jan-2006
WF: Mailer Cancellation Email             N            NATEBRUNNE  03-Jan-2006
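A sketch of the 30-day site-level profile check; the join between the FND profile tables shown here, and level_id 10001 meaning Site level, are assumptions to verify against your release:

```sql
SELECT t.user_profile_option_name,
       v.profile_option_value,
       v.last_update_date
  FROM fnd_profile_options o,
       fnd_profile_options_tl t,
       fnd_profile_option_values v
 WHERE o.profile_option_name = t.profile_option_name
   AND o.profile_option_id   = v.profile_option_id
   AND t.language            = 'US'
   AND v.level_id            = 10001          -- Site level
   AND v.last_update_date    > SYSDATE - 30
 ORDER BY v.last_update_date;
```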


Profile General Checks
Non-User Level Profile Settings relating to Trace or Debug

User Level Profile Settings relating to Trace or Debug


PROF_ID  LEVEL_ID  USER_NAME   PROF_NAME               VALUE
-------  --------  ----------  ----------------------  -----
4176     10004     KIRKMEITZ   FND: Debug Log Enabled  Y
1528     10004     DOUGHAYES   PA: Debug Mode          Y
1528     10004     MIKEMARTIN  PA: Debug Mode          Y
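The trace/debug listing can be sketched the same way, filtering on the profile display name (the table joins are assumptions to verify against your release):

```sql
SELECT v.profile_option_id, v.level_id,
       t.user_profile_option_name, v.profile_option_value
  FROM fnd_profile_options o,
       fnd_profile_options_tl t,
       fnd_profile_option_values v
 WHERE o.profile_option_name = t.profile_option_name
   AND o.profile_option_id   = v.profile_option_id
   AND t.language = 'US'
   AND (UPPER(t.user_profile_option_name) LIKE '%DEBUG%'
     OR UPPER(t.user_profile_option_name) LIKE '%TRACE%')
   AND v.profile_option_value = 'Y';
```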




Module Specific Checks
Vary widely from site to site

General Ledger: GL consolidation tables that may be orphaned

iProcurement: check for stuck Web requisitions

Research the 'Purge System Saved Requisition' concurrent program

OM: check OE Processing Messages for the number and types of rows

Research the 'Message Purge' concurrent program to clean them up


R12: Collecting configuration information for HTTP, OC4J and OPMN (take a weekly backup of the files mentioned below in case you need them to sort out an issue)

Do the following on all middle tiers as the application tier user:


zip -r /tmp/`uname -n`-`date +%m%d%y,%H%M`_iAS_CONFIG.zip \
  $ORA_CONFIG_HOME/10.1.3/Apache/Apache/conf \
  $ORA_CONFIG_HOME/10.1.3/config \
  $INST_TOP/pids/10.1.3/config \
  $ORA_CONFIG_HOME/10.1.3/j2ee \
  $ORA_CONFIG_HOME/10.1.3/javacache/admin \
  $ORA_CONFIG_HOME/10.1.3/network/admin \
  $ORA_CONFIG_HOME/10.1.3/opmn

Wednesday, November 21, 2012

How to Remove an Oracle E-Business Suite Release 12.x Windows Environment





The procedure described in this document (Note 567507.1 on OracleMetalink) lists the steps you must take to remove an E-Business Suite R12 environment or environments from a Windows system.



There are many potential reasons for wishing to remove an environment, including an installation failure. Such a failure may result from improper install user privileges, missing prerequisite software, inadequate system resources, or other causes. Following the requirements and detailed instructions in the Oracle Applications Installation Guide, Oracle Applications Release Notes and Oracle Applications Installation and Upgrade Notes can help avoid failed installations. However, if your installation does fail, you must clean your system before you attempt to re-install Oracle Applications.



For a multi-node installation, repeat the procedure for each node on your system. The procedure requires you to use the Registry Editor to alter the Windows registry, and to delete the faulty instances from your system. You should read and understand all steps described here before you begin.



Attention: Using the Registry Editor incorrectly can cause serious problems that may require you to reinstall your operating system. It should be used only by someone who is trained as a Windows system administrator.







Step 1. Back up the System



Back up your system and the Windows registry before you begin. Refer to the Microsoft document below for instructions:



Microsoft Article ID 326216: How to use the backup feature to back up and restore data in Windows Server 2003

Step 2. Alter the Windows Registry

Delete the relevant Oracle components from the Windows registry, as follows.



To delete a specific instance (in the case that there are multiple EBS instances or other Oracle software on the system):



Launch the Registry Editor.

Navigate to the HKEY_LOCAL_MACHINE\Software\ORACLE\APPLICATIONS\12.0.0 key.

Delete the entry corresponding to the instance to be deleted.

Navigate to the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services key.

Carefully select and delete each Oracle service corresponding to the instance you wish to delete. For example, if you wish to delete "PROD", delete "OraclePROD__ToolsASControl".

When you are finished, exit the Registry Editor.

To delete all Oracle environments on the system:



Launch the Registry Editor.

Navigate to the HKEY_LOCAL_MACHINE\Software key.

Select the Apache Group subkey and delete it.

Select the Oracle subkey and delete it.

Navigate to the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services key.

Carefully select and delete each service beginning with Oracle. For example, delete "OraclePROD__ToolsASControl".

When you are finished, exit the Registry Editor.

Step 3. Alter Environment Settings



Complete these steps to delete Oracle parameters from the environment.



Navigate to the Environment Variables editing screen according to your system setup. For example, right-click My Computer and select Properties. From the Advanced tab, click the Environment Variables button.



To delete a specific instance (in the case that there are multiple EBS instances, or there is other Oracle software on the system):



Select the following system variable if created during the failed installation, and modify it to point to the environment(s) you would like to keep: PERL5LIB.

Select the Path variable, but do not delete it. Carefully edit it and remove any component that references the failed install. For example, delete "e:\oracle\PROD\db\tech_st\10.2.0\bin;".

Save and exit System Properties.

Alternatively, to remove all Oracle environments from the system:



Select the following system variable if created during the failed installation, and delete it: PERL5LIB.

Select the Path variable, but do not delete it. Carefully edit it and remove any component with "Oracle" in the name, or that otherwise references the failed install. For example, delete "e:\oracle\PROD\db\tech_st\10.2.0\bin;".

Save and exit System Properties.

Step 4. Clean up the Program Menu



Launch the Program menu (Start->Programs). Carefully delete all Oracle programs associated with the instance or instances to be removed. Select each one, right-click it, and choose the Delete option.



Step 5. Restart your Machine



Restart your machine. This removes all Windows services that you deleted from the registry and frees up the files accessed by the associated processes so that they may also be deleted (see next step).



Step 6. Delete the Oracle Filesystem



It is now safe to remove the Oracle filesystem that was partially installed during the failed installation. Go to the base install directory, or top-level directory under which you installed the Database Server (DB) or the Applications Tier Server (APPS) directories, or both.

Note that you may choose to rename a directory before you delete it. This allows you to use the same directory name and start a new installation even before you finish the delete filesystem operation. For example, if you placed your filesystem in a "test" directory under "E:\oracle":



C:\> E:
E:\> cd oracle
E:\oracle> rename test oldtest
E:\oracle> del /s /q oldtest



Step 7. Delete Programs and Oracle Inventory Entries



To delete the inventory associated with a specific instance:



Start the Oracle Universal Installer using: Start >> All Programs >> Oracle-_db102_RDBMS>>Oracle Installation Products>>Universal Installer

Select Installed Products

Enable "Show empty homes"

Deinstall the entries in the global inventory associated with the Oracle Applications R12 instance which you removed earlier. For example, for the PROD instance, you would delete:

RDBMS ORACLE HOME (e.g. "PROD_DB__oracle_PROD_db_tech_st_10_2_0")

10gAS ORACLE HOME (e.g."PROD_WEBOH__oracle_PROD_apps_tech_st_10_1_3")

Developer 10g ORACLE HOME (for example, "PROD_TOOLS__oracle_PROD_apps_tech_st_10_1_2")

Exit the Universal Installer

To delete the files associated with all Oracle environments on the system:



Open a command window and navigate to the C:\Program Files\ directory.

Delete the Oracle subdirectory.

If your Oracle Inventory directory was not C:\Program Files\Oracle\Inventory and was therefore not deleted in the previous step, locate it and delete it.

How to use the backup feature to back up and restore data in Windows Server 2003




Article ID: 326216



SUMMARY

This step-by-step article describes how to use the Backup feature to back up and restore data on your Windows Server 2003-based computer. This article is intended for users who back up and restore data, and it includes information about how to back up and restore the system configuration and local registry.



To perform the procedures in this article, you must be logged on as a member of the Administrators group or the Backup Operators group.



Backing Up the Server



You can manually back up data or use the Backup Wizard, which is included in the Backup feature. You can back up the whole contents of the server, selected portions of the server, or the system state data (the system configuration information).



To Back Up Selected Files or Folders



Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup. The Backup or Restore Wizard starts.

Click Advanced Mode.

Click the Backup tab.

On the Job menu, click New.

Expand the drive or folder that contains the items that you want to back up. Click to select the check boxes next to the files, folders, or drives that you want to back up.

In the Backup destination box, specify the destination for the new job. To do so, do one of the following:

If you want to back up files and folders to a file, click File.

If you want to back up to tape, click a tape device.



NOTE: If a tape device is not connected to your computer, File is the only backup media type that is available in the Backup destination box.

In the Backup media or file name box, do one of the following:

If you are backing up to a file, specify a path and file name for the backup (.bkf) file. Or, click Browse, specify a file name and location where you want to save the file, and then click Save.

If you are backing up to tape, click the tape that you want to use.

On the Tools menu, click Options. Specify any additional backup options that you want on the appropriate tabs of the Options page. Click OK.

Click Start Backup.

If you want to set advanced backup options, such as data verification or hardware compression, click Advanced. Specify the options that you want, and then click OK.

Review the settings on the Backup Job Information page. Specify whether you want this backup to replace the information that is already present on the destination media, or add this backup to the existing information.

Click Start Backup.

To Back Up the System State (Including Registry Settings)



To back up the system state (including the registry hives system, software, security, the Security Accounts Manager (SAM), and the default user (but not HKEY_CURRENT_USER)), follow these steps:

Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup. The Backup or Restore Wizard starts.

Click Advanced Mode.

Click the Backup tab.

On the Job menu, click New.

Click to select the System State check box.

Click to select the check boxes next to any other files, folders, or drives that you want to back up.

In the Backup destination box, specify the destination for the new job. To do so, do one of the following:

If you want to back up files and folders to a file, click File.

If you want to back up to tape, click a tape device.



NOTE: If a tape device is not connected to your computer, File is the only backup media type that is available in the Backup destination box.

In the Backup media or file name box, do one of the following:

If you are backing up to a file, specify a path and file name for the backup (.bkf) file. Or, click Browse, specify a file name and location where you want to save the file, and then click Save.

If you are backing up to tape, click the tape that you want to use.

On the Tools menu, click Options. Specify any additional backup options that you want on the appropriate tabs of the Options page. Click OK.

Click Start Backup.

If you want to set advanced backup options, such as data verification or hardware compression, click Advanced. Specify the options that you want, and then click OK.

Review the settings on the Backup Job Information page. Specify whether you want this backup to replace the information that is already present on the destination media, or add this backup to the existing information.

Click Start Backup.

To Schedule a Backup for a Later Time or Date



You may want to run a backup operation when there is low system usage. However, such times may be late at night or on weekends. You can schedule backup jobs to run on a particular day and time.



NOTE: To schedule a backup operation, the Task Scheduler service must be running.

Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup. The Backup or Restore Wizard starts.

Click Advanced Mode.

Click the Backup tab.

On the Job menu, click New.

Expand the drive or folder that contains the items that you want to back up. Click to select the check boxes next to the files, folders, or drives that you want to back up.

In the Backup destination box, specify the destination for the new job. To do so, do one of the following:

If you want to back up files and folders to a file, click File.

If you want to back up to tape, click a tape device.



NOTE: If a tape device is not connected to your computer, File is the only backup media type that is available in the Backup destination box.

In the Backup media or file name box, do one of the following:

If you are backing up to a file, specify a path and file name for the backup (.bkf) file. Or, click Browse, specify a file name and location where you want to save the file, and then click Save.

If you are backing up to tape, click the tape that you want to use.

On the Tools menu, click Options. Specify any additional backup options that you want on the appropriate tabs of the Options page. Click OK.

Click Start Backup.

Click Schedule.



If a message prompts you to save your current backup selections, click OK. On the Save As page that appears, specify a name and location where you want to save the backup, and then click Save.

In the Job name box, type a name for the scheduled backup job, and then click Properties.

Click the Schedule tab. In the Schedule Task box, click how frequently you want the backup job to run, and then in the Start time box, specify a time when you want the backup to run, and then click OK.

On the Set Account Information page that appears, type the user name and password of the user you want the scheduled backup to run as, and then click OK.

Click OK.



The backup job that you scheduled appears on the calendar on the Schedule Jobs tab. The scheduled backup job automatically starts at the time and date that you specified.

Close the Backup Utility page.

To Back Up Data by Using the Backup Wizard



Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup. The Backup or Restore Wizard starts.

Click Advanced Mode.

On the Welcome tab, click Backup Wizard (Advanced). The Backup Wizard starts. Click Next.

Specify what you want to back up, and then click Next.

If you selected Back up selected files, drives, or network data in step 4, expand the drive or folder that contains the items that you want to back up, click to select the check boxes next to the drive, folder, or file that you want to back up, and then click Next.

Specify the backup type, destination, and name in the appropriate boxes, and then click Next.



NOTE: If a tape drive is not connected to your computer, File is the only backup media type that is available in the Select the backup type box.

Review the settings that appear on the Completing the Backup Wizard page. If you want to specify advanced backup options, click Advanced, specify the options that you want, and then click OK.

Click Finish.

Restoring Data to the Server



If a data loss occurs, you can restore your backup data manually or by using the Restore Wizard, which is included in the Backup feature.



To Restore Selected Files from a File or Tape



Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup. The Backup or Restore Wizard starts.

Click Advanced Mode.

Click the Restore and Manage Media tab.

Click the media that you want to restore, and then click to select the check boxes next to the drives, folders, or files that you want to restore.

In the Restore file to box, specify the location where you want to restore the files by doing one of the following:

If you want to restore the files or folders to the same location in which they were when you backed up the data, click Original location, and then go to step 7.

If you want to restore the files or folders to a new location, click Alternate location.



This option preserves the folder structure of the backed up data.

If you want to restore the files and folders to a single location, click Single folder.

If you selected Alternate location or Single folder, type the location in which you want the data to be restored, or click Browse and select the location, and then click OK.

On the Tools menu, click Options. Click the Restore tab, specify the restore option that you want, and then click OK.

Click Start Restore.

On the Confirm Restore page that appears, click Advanced if you want to set advanced restore options, and then click OK.

Click OK to start the restore operation.

To Restore the System State Data (Including Registry Information)



Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup. The Backup or Restore Wizard starts.

Click Advanced Mode.

Click the Restore and Manage Media tab.

In the Items to restore box, expand the media that you want to restore, and then click to select the System State check box.

Click to select the check boxes next to any other drives, folders, or files that you want to restore.

In the Restore file to box, specify the location where you want to restore the files by doing one of the following:

If you want to restore the files or folders to the same location in which they were when you backed up the data, click Original location, and then go to step 8.

If you want to restore the files or folders to a new location, click Alternate location.



This option preserves the folder structure of the backed up data.

If you want to restore the files and folders to a single location, click Single folder.



NOTE: If you do not designate an alternate location for the restored data, the restore operation erases the current system state data and replaces it with the information that you are restoring.

If you selected Alternate location or Single folder, type the location in which you want the data to be restored, or click Browse and select the location.

Click Start Restore.

On the Confirm Restore page that appears, click Advanced if you want to set advanced restore options, and then click OK.

Click OK to start the restore operation.

To Restore Backed Up Data by Using the Restore Wizard



Click Start, point to All Programs, point to Accessories, point to System Tools, and then click Backup. The Backup or Restore Wizard starts.

Click Advanced Mode.

On the Welcome tab, click Restore Wizard (Advanced). The Restore Wizard starts. Click Next.

In the Items to restore box, expand the media that you want to restore, click to select the check boxes next to the drives, folders, or files that you want to restore, and then click Next.

Review the settings that appear on the Completing the Restore Wizard page. If you want to specify advanced backup options, click Advanced, specify the options that you want, and then click OK.

Click Finish.

Troubleshooting



You Cannot Back Up or Restore Data



You must be a member of the Administrators group or the Backup Operators group on the local computer to back up or restore data.



You Cannot Schedule a Backup Operation



The Task Scheduler service must be running before you can schedule a backup. If the Task Scheduler service is not already running, follow these steps to start it:

Click Start, and then click Run.

In the Open box, type cmd, and then click OK.

At the command prompt, type net start schedule, and then press ENTER.




Properties

Article ID: 326216 - Last Review: December 3, 2007 - Revision: 9.6

APPLIES TO

Microsoft Windows Server 2003, Datacenter Edition (32-bit x86)

Microsoft Windows Server 2003, Enterprise Edition (32-bit x86)

Microsoft Windows Server 2003, Standard Edition (32-bit x86)

Microsoft Windows Server 2003, Web Edition

Microsoft Windows Server 2003, 64-Bit Datacenter Edition

Microsoft Windows Server 2003, Enterprise x64 Edition

Microsoft Windows Server 2003, Standard x64 Edition

Microsoft Windows Small Business Server 2003 Standard Edition

Microsoft Windows Small Business Server 2003 Premium Edition