Thursday, December 18, 2014

How to save your HFM Consolidation History


When task auditing is enabled, Hyperion Financial Management stores a record of each task in the _TASK_AUDIT table in the configured relational database. This table grows quickly and is often purged periodically by administrators or automated processes. To track consolidation history, and more importantly the duration of those consolidations, you will need to capture these records before they are purged.
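If you can query the back-end database directly, a quick way to gauge how fast the table is growing is to count records by task type. Here is a minimal sketch; it assumes the example table name used throughout this post (your table will be named after your application, as described below):

SELECT [ActivityCode], COUNT(*) AS [Records]
FROM [dbo].[APPLICATION_TASK_AUDIT]
GROUP BY [ActivityCode]
ORDER BY [Records] DESC;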
One method of capturing this data is to export it from the HFM application prior to truncating the table. This is suitable for an environment where you are unable to access the back-end database tables… like most environments.
  1. Log into Workspace as an HFM application administrator.
  2. Select Administration > Task Audit.
  3. Select the date range, user, and/or task types.
  4. Click the Export button and save the result.
The output from the export is saved as a CSV file containing the same data that was displayed on screen. You can then save this file off someplace safe and retrieve it whenever you need to review the data.
Another method is to create a database trigger on the _TASK_AUDIT table that captures new consolidation records and inserts them into a new table. This method requires some knowledge of database management and SQL, as well as access to the SQL Server database associated with the HFM application.
The first step is to create a table to store the consolidation results.
Below is the SQL I use to create the history table, which I cleverly named CONSOLIDATION_HISTORY. The SQL also populates the table with the existing consolidation records. In this example, the database is called TEST_HYP_HFM_1 and the schema is dbo. The task audit table is named with the HFM application name followed by “_TASK_AUDIT”; in the SQL I called it APPLICATION_TASK_AUDIT, so just replace that name to match the table name in your database. Note the SQL is written in SQL Server T-SQL.
USE [TEST_HYP_HFM_1]
GO
IF (NOT EXISTS (SELECT *
  FROM INFORMATION_SCHEMA.TABLES
  WHERE TABLE_SCHEMA = 'dbo' AND  TABLE_NAME = 'CONSOLIDATION_HISTORY'))
BEGIN
CREATE TABLE [dbo].[CONSOLIDATION_HISTORY](
              [GUID] [nvarchar](32) NOT NULL,
              [ActivityUserID] [int] NOT NULL,
              [ActivitySessionID] [int] NOT NULL,
              [ActivityCode] [int] NOT NULL,
              [ServerName] [nvarchar](256) NOT NULL,
              [AppName] [nvarchar](20) NOT NULL,
              [Start] [datetime] NULL,
              [End] [datetime] NULL,
              [Duration] [int] NULL,
              [Description] [nvarchar](1000) NULL,
              [Scenario] [nvarchar](1000) NULL,
              [Year] [nvarchar](1000) NULL,
              [StartPeriod] [nvarchar](1000) NULL,
              [EndPeriod] [nvarchar](1000) NULL,
              [Entity] [nvarchar](4000) NULL,
              [Parent] [nvarchar](4000) NULL,
        CONSTRAINT [PK_CONSOLIDATION_HISTORY] PRIMARY KEY CLUSTERED
       (
              [GUID] ASC
       )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
       ) ON [PRIMARY]
END
GO
INSERT INTO [dbo].[CONSOLIDATION_HISTORY] (
   [GUID]
   ,[ActivityUserID]
   ,[ActivitySessionID]
   ,[ActivityCode]
   ,[ServerName]
   ,[AppName]
   ,[Start]
   ,[End]
   ,[Duration]
   ,[Description]
   ,[Scenario]
   ,[Year]
   ,[StartPeriod]
   ,[EndPeriod]
   ,[Entity]
   ,[Parent])
(SELECT [strGUID]
   ,[ActivityUserID]
   ,[ActivitySessionID]
   ,[ActivityCode]
   ,[ServerName]
   ,[AppName]
   ,cast([StartTime]-2 as datetime) as [Start]
   ,cast([EndTime]-2 as datetime) as [End]
   ,datediff(s,cast([StartTime]-2 as datetime),cast([EndTime]-2 as datetime)) as [Duration]
   ,[strDescription]
   ,SUBSTRING([strDescription],CHARINDEX('The Scenario is',[strDescription])+16,(CHARINDEX(';',[strDescription],CHARINDEX('The Scenario is',[strDescription])))-(CHARINDEX('The Scenario is',[strDescription])+16)) as [Scenario]
   ,SUBSTRING([strDescription],CHARINDEX('The Year is',[strDescription])+11,(CHARINDEX(';',[strDescription],CHARINDEX('The Year is',[strDescription])))-(CHARINDEX('The Year is',[strDescription])+11)) as [Year]
   ,SUBSTRING([strDescription],CHARINDEX('The Start Period is',[strDescription])+19,(CHARINDEX(';',[strDescription],CHARINDEX('The Start Period is',[strDescription])))-(CHARINDEX('The Start Period is',[strDescription])+19)) as [StartPeriod]
   ,SUBSTRING([strDescription],CHARINDEX('The End Period is',[strDescription])+18,(CHARINDEX(';',[strDescription],CHARINDEX('The End Period is',[strDescription])))-(CHARINDEX('The End Period is',[strDescription])+18)) as [EndPeriod]
   ,CASE WHEN CHARINDEX('The Parent is',[strDescription]) > 0
       THEN SUBSTRING([strDescription],CHARINDEX('The Entity is',[strDescription])+14,(CHARINDEX(';',[strDescription],CHARINDEX('The Entity is',[strDescription])))-(CHARINDEX('The Entity is',[strDescription])+14))
       ELSE REPLACE(RIGHT([strDescription],LEN([strDescription])-CHARINDEX('The Entity is',[strDescription])-13),'.','')
   END as [Entity]
   ,CASE WHEN CHARINDEX('The Parent is',[strDescription]) > 0
       THEN REPLACE(RIGHT([strDescription],LEN([strDescription])-CHARINDEX('The Parent is',[strDescription])-13),'.','')
       ELSE Null
   END as [Parent]
  FROM [TEST_HYP_HFM_1].[dbo].[APPLICATION_TASK_AUDIT]
  where activitycode=4
  )
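As an aside, the two-day adjustment in the date math above isn't arbitrary: HFM appears to store these timestamps as OLE Automation day numbers, whose day zero is December 30, 1899, while day zero for SQL Server's DATETIME is January 1, 1900. You can sanity-check the offset yourself:

-- Casting -2 lands exactly on the OLE Automation day zero.
SELECT CAST(-2 AS DATETIME) AS [OLEDayZero];  -- returns 1899-12-30 00:00:00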
The last step is to create the trigger. The SQL below creates a trigger on the APPLICATION_TASK_AUDIT table that inserts a record into the CONSOLIDATION_HISTORY table whenever a record with an ActivityCode of 4 is inserted; ActivityCode 4 is a consolidation record. The members of each dimension mentioned in the description are parsed out into their own fields so you can query for similar consolidations and look for trends. The duration is stored in seconds using the DATEDIFF function. The source data for the start and end times is stored numerically, and it's a simple matter to convert it to a DATETIME; as noted above, two must be subtracted from the value to return the correct date. You can verify the result against the values returned in the Task Audit screen of the HFM application. Some records include the selected Parent member in the description while others do not, so the trigger tests for the string “The Parent is” in the description prior to parsing out the Entity and Parent members.
Once the SQL is modified and executed successfully, records will be inserted into the CONSOLIDATION_HISTORY table whenever consolidations are completed in the source HFM application. Accessing the data becomes the next challenge.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TRIGGER dbo.InsertHistory
ON  dbo.APPLICATION_TASK_AUDIT
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;

INSERT INTO dbo.CONSOLIDATION_HISTORY (
[GUID]
,[ActivityUserID]
,[ActivitySessionID]
,[ActivityCode]
,[ServerName]
,[AppName]
,[Start]
,[End]
,[Duration]
,[Description]
,[Scenario]
,[Year]
,[StartPeriod]
,[EndPeriod]
,[Entity]
,[Parent])
(SELECT [strGUID]
,[ActivityUserID]
,[ActivitySessionID]
,[ActivityCode]
,[ServerName]
,[AppName]
,cast([StartTime]-2 as datetime) as [Start]
,cast([EndTime]-2 as datetime) as [End]
,datediff(s,cast([StartTime]-2 as datetime),cast([EndTime]-2 as datetime)) as [Duration]
,[strDescription]
,SUBSTRING([strDescription],CHARINDEX('The Scenario is', [strDescription])+16, (CHARINDEX(';',[strDescription],CHARINDEX('The Scenario is',[strDescription])))-(CHARINDEX('The Scenario is',[strDescription])+16)) as [Scenario]
,SUBSTRING([strDescription],CHARINDEX('The Year is',[strDescription])+11,(CHARINDEX(';',[strDescription],CHARINDEX('The Year is',[strDescription])))-(CHARINDEX('The Year is',[strDescription])+11)) as [Year]
,SUBSTRING([strDescription],CHARINDEX('The Start Period is',[strDescription])+19,(CHARINDEX(';',[strDescription],CHARINDEX('The Start Period is',[strDescription])))-(CHARINDEX('The Start Period is',[strDescription])+19)) as [StartPeriod]
,SUBSTRING([strDescription],CHARINDEX('The End Period is',[strDescription])+18,(CHARINDEX(';',[strDescription],CHARINDEX('The End Period is',[strDescription])))-(CHARINDEX('The End Period is',[strDescription])+18)) as [EndPeriod]
,CASE WHEN CHARINDEX('The Parent is',[strDescription]) > 0
THEN SUBSTRING([strDescription],CHARINDEX('The Entity is',[strDescription])+14,(CHARINDEX(';',[strDescription],CHARINDEX('The Entity is',[strDescription])))-(CHARINDEX('The Entity is',[strDescription])+14))
ELSE REPLACE(RIGHT([strDescription],LEN([strDescription])-CHARINDEX('The Entity is',[strDescription])-13),'.','')
END as [Entity]
,CASE WHEN CHARINDEX('The Parent is',[strDescription]) > 0
THEN REPLACE(RIGHT([strDescription],LEN([strDescription])-CHARINDEX('The Parent is',[strDescription])-13),'.','')
ELSE Null
END as [Parent]
FROM inserted
WHERE [ActivityCode]=4)
END
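With history accumulating in the table, a simple query gets you started on that challenge. The sketch below summarizes consolidation performance by Scenario and Year using the columns created above:

SELECT [Scenario]
   ,[Year]
   ,COUNT(*) AS [Consolidations]
   ,AVG([Duration]) AS [AvgSeconds]
   ,MAX([Duration]) AS [MaxSeconds]
   ,MAX([End]) AS [LastRun]
FROM [dbo].[CONSOLIDATION_HISTORY]
GROUP BY [Scenario], [Year]
ORDER BY [LastRun] DESC;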

Subcubes - How HFM Stores Data




HFM organizes data into sections called "subcubes." A subcube is a set of records that allows HFM to cache portions of the database in memory.
A subcube is made up of:
  • one member each from the Scenario, Year, Entity, and Value dimensions
  • all members of the Account, ICP, View, and any Custom dimensions
  • each Account, ICP, View, and Custom dimension combination contains 12 values, one for each month of the Period dimension
HFM then stores these subcubes in one of three tables:
  • DCE (Currency Subcube) - stores Entity and Parent Currency values and their adjustments.
  • DCN (Parent Subcube) - stores the remaining Value dimension members.
  • DCT (Journal Transactions) - stores journal transactions. When journals are posted, the values are transferred to either DCE (for <Entity Curr Adjs> and <Parent Curr Adjs>) or DCN (for [Parent Adjs] and [Contribution Adjs]).

Friday, December 5, 2014

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Are you getting this error? It simply means that the web server or URL you are connecting to does not present a certificate that chains to a CA your JVM trusts. Being a programmer, however, you will want to find a way to solve this issue.
What you need to do is import the server certificate and install it in your JDK's keystore. If I am talking Greek, it's OK; I too just learned this. Just follow these steps and you will be able to get rid of that error.
1. First of all, copy the URL you are connecting to and paste it into your browser. Let us say you are using IE: just paste the URL into the address bar and press Enter.
2. You will now probably see a dialog box warning you about the certificate. Click 'View Certificate' and install the certificate, ignoring any warning messages.
3. Now that the server certificate is installed on your computer, your browser will not warn you when you visit the same site again. However, your JRE does not yet know about this certificate's existence until you add it to its keystore. You will usually use keytool to manage certificates. Keytool is a command-line utility with numerous arguments that allow you to create and manage keystores for housing digital certificates. For the complete documentation of keytool, see http://java.sun.com/j2se/1.3/docs/tooldocs/win32/keytool.html
4. You can list the current certificates contained within a keystore using the keytool -list command. The initial password for the cacerts keystore is changeit. For example:
    C:\Program Files\Citrix\Citrix Extranet Server\SGJC\jre\bin>keytool -list -keystore ..\lib\security\cacerts
    Enter keystore password: changeit
    You will then see something like this:
    Keystore type: jks
    Keystore provider: SUN
    Your keystore contains 11 entries:
    engweb, Wed Apr 11 16:22:49 EDT 2001, trustedCertEntry,
    Certificate fingerprint (MD5): 8C:24:DA:52:7A:4A:16:4B:8E:FB:67:44:C9:D2:E4:16
    thawtepersonalfreemailca, Fri Feb 12 15:12:16 EST 1999, trustedCertEntry,
    Certificate fingerprint (MD5): 1E:74:C3:86:3C:0C:35:C5:3E:C2:7F:EF:3C:AA:3C:D9
    thawtepersonalbasicca, Fri Feb 12 15:11:01 EST 1999, trustedCertEntry,
    Certificate fingerprint (MD5): E6:0B:D2:C9:CA:2D:88:DB:1A:71:0E:4B:78:EB:02:41
    verisignclass3ca, Mon Jun 29 13:05:51 EDT 1998, trustedCertEntry,
    Certificate fingerprint (MD5): 78:2A:02:DF:DB:2E:14:D5:A7:5F:0A:DF:B6:8E:9C:5D
    thawteserverca, Fri Feb 12 15:14:33 EST 1999, trustedCertEntry,
    Certificate fingerprint (MD5): C5:70:C4:A2:ED:53:78:0C:C8:10:53:81:64:CB:D0:1D
    thawtepersonalpremiumca, Fri Feb 12 15:13:21 EST 1999, trustedCertEntry,
    Certificate fingerprint (MD5): 3A:B2:DE:22:9A:20:93:49:F9:ED:C8:D2:8A:E7:68:0D
    verisignclass4ca, Mon Jun 29 13:06:57 EDT 1998, trustedCertEntry,
    Certificate fingerprint (MD5): 1B:D1:AD:17:8B:7F:22:13:24:F5:26:E2:5D:4E:B9:10
    verisignclass1ca, Mon Jun 29 13:06:17 EDT 1998, trustedCertEntry,
    Certificate fingerprint (MD5): 51:86:E8:1F:BC:B1:C3:71:B5:18:10:DB:5F:DC:F6:20
    verisignserverca, Mon Jun 29 13:07:34 EDT 1998, trustedCertEntry,
    Certificate fingerprint (MD5): 74:7B:82:03:43:F0:00:9E:6B:B3:EC:47:BF:85:A5:93
    thawtepremiumserverca, Fri Feb 12 15:15:26 EST 1999, trustedCertEntry,
    Certificate fingerprint (MD5): 06:9F:69:79:16:66:90:02:1B:8C:8C:A2:C3:07:6F:3A
    verisignclass2ca, Mon Jun 29 13:06:39 EDT 1998, trustedCertEntry,
    Certificate fingerprint (MD5): EC:40:7D:2B:76:52:67:05:2C:EA:F2:3A:4F:65:F0:D8

5. Now you have to add the previously installed certificate to this keystore. To add it, begin by exporting your CA root certificate as a DER-encoded binary file and saving it as C:\root.cer. (You can view the installed certificates under Tools > 'Internet Options' > Content > Certificates. Once you open the certificates, locate the one you just installed under 'Trusted Root Certification Authorities', select it, and click 'Export'. You can now save it, as DER-encoded binary, on your C: drive.)
6. Then use the keytool -import command to import the file into your cacerts keystore. For example:
    keytool -import -alias myprivateroot -keystore ..\lib\security\cacerts -file c:\root.cer
Enter keystore password: changeit
Owner: CN=Division name, OU=Department, O=Your Company, L=Anytown,
ST=NC, C=US, EmailAddress=you@company.com
Issuer: CN=Division name, OU=Department, O=Your Company, L=Anytown,
ST=NC, C=US, EmailAddress=you@company.com
Serial number: 79805d77eecfadb147e84f8cc2a22106
Valid from: Wed Sep 19 14:15:10 EDT 2001 until: Mon Sep 19 14:23:20 EDT 2101
Certificate fingerprints:
MD5: B6:30:03:DC:6D:73:57:9B:F4:EE:13:16:C7:68:85:09
SHA1: B5:C3:BB:CA:34:DF:54:85:2A:E9:B2:05:E0:F7:84:1E:6E:E3:E7:68
Trust this certificate? [no]: yes
Certificate was added to keystore
7. Now run keytool -list again to verify that your private root certificate was added:
    C:\Program Files\Citrix\Citrix Extranet Server\SGJC\jre\bin>keytool -list -keystore ..\lib\security\cacerts
You will now see a list of all the certificates including the one you just added.
This confirms that your private root certificate has been added to the Extranet server cacerts keystore as a trusted certificate authority.

Thursday, December 4, 2014

java.net.BindException: Address already in use: JVM_Bind:8080 Solution




This exception is self-explanatory: a Java application is trying to bind to port 8080, but that port is already in use by some other process, so the JVM's bind to that particular port, here 8080, fails. To fix this error you need to find out which process is listening on port 8080. Below we will see how to find the process listening on a particular port in Windows and then how to kill that process to make the port free to use.
Common Scenario when you see "Address already in use: JVM_Bind"
1. While doing Java remote debugging in Eclipse, when Eclipse tries to connect to your remote Java application on a particular port and that port is not free.
2. Starting Tomcat when an earlier instance of Tomcat is already running and bound to port 8080. It will fail with SEVERE: Error initializing endpoint java.net.BindException: Address already in use: JVM_Bind:8080

Find the process listening on a port


1. Go to Run.
2. Type cmd.
3. Run this command:


netstat -ano | find "8080"

You will get a list of the processes listening on that port.
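The last column of the netstat output is the PID of the process that owns the port. To free the port, kill that process. The PID below is just an example; substitute the one netstat reported:

taskkill /PID 1234 /F

If you want to see what the process is before killing it, run tasklist /FI "PID eq 1234" first.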

Wednesday, December 3, 2014

User Managed Backup --- Oracle


User Managed Backup


STEP:1

1. Physical Backup

1. Cold Backup (Consistent Backup)
2. Hot Backup (Inconsistent Backup)

2. Logical Backup

1. Export / Import
2. Expdp/Impdp (available from Oracle 10g onwards)

Now let's look at physical backups.


Cold Backup: (Consistent Backup)


A cold (or off-line) backup is a backup performed while the database is off-line and unavailable to its users. Cold backups can be taken regardless of whether the database is in ARCHIVELOG or NOARCHIVELOG mode. It is easier to restore from off-line backups because no recovery (from archived logs) is required to make the database consistent.

Why do we say a cold backup is a consistent backup?


The cold backup is taken after the database has been shut down, at which point the SCNs in all the datafile headers are the same. When we restore a cold backup, no redo needs to be applied for recovery. The backup includes the datafiles, logfiles, controlfile, parameter file & password file.


Cold Backup Steps:

1. Get the file path information using the queries below:

Select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;


2. Take a backup of the password file & parameter file (optional).
3. Alter system switch logfile;
4. Shutdown immediate
5. Copy all the data files / control files / log files using OS commands and place them in the backup path.
6. Start up the database.
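Putting steps 3 through 6 together, a minimal cold backup sketch from SQL*Plus might look like this; the paths are examples, so adjust them for your environment:

Alter system switch logfile;
Shutdown immediate
Host cp /u01/oradata/orcl/*.dbf /u01/backup/
Host cp /u01/oradata/orcl/*.log /u01/backup/
Host cp /u01/oradata/orcl/*.ctl /u01/backup/
Startup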


Hot Backup (Inconsistent backup)

A hot (or on-line) backup is a backup performed while the database is open and available for use (read and write activity). Except for Oracle exports, you can only do on-line backups when the database is in ARCHIVELOG mode.

Why do we say a hot backup is an inconsistent backup?

The hot backup is taken while the database is running. During this time, the header SCNs of the datafiles being backed up are not the same as those of the other tablespaces' datafiles. When we restore a hot backup, redo is needed for recovery.

Hot Backup Steps:

1. Get the file path information using the query below.
Select tablespace_name, file_name from dba_data_files order by 1;

2. Put the tablespace in begin backup mode, then use OS commands to copy the datafiles belonging to that tablespace into the backup path.
3. Once the datafiles are copied, release the tablespace from begin backup mode with end backup.
4. Repeat steps 2-3 for all your tablespaces.
5. Take the controlfile backup.


Alter Database backup controlfile to '/u01/backup/control01.ctl'; --> binary format

The following produce a readable format (used for cloning):

Alter database backup controlfile to trace;
Alter database backup controlfile to trace as '/u01/backup/control01.ora';

6. Back up all the archive log files generated between the previous backup and this backup as well.
7. Take a backup of the password file & parameter file (optional).

Example steps:

1. Get the file path information (see the query above).
2. Alter tablespace system begin backup;
To confirm the tablespace is in begin backup mode, use the query below:
Select * from v$backup; (refer to the Change#, Time columns)

3. Host cp /u01/oracle/raja/system1.DBF /u01/backup/system01.dbf --> use an OS command to copy the datafile.
4. Alter tablespace system end backup;
To confirm the tablespace is out of backup mode, run the query again:
Select * from v$backup;


Hot Backup internal operations:

Note: While trying to write up “during hot backup, what internal operations are going on?” I referred to several notes, but the one below is very clear and nice.
Reference: http://knol.google.com/k/franck-pachot/oracle-begin-backupend-backup/17uabcrki6uux/3#

During backup mode, for each datafile in the tablespace, here is what happens:

1- When BEGIN BACKUP is issued:

·The hot backup flag in the datafile headers is set, so that the copy is identified as a hot backup copy. This is to manage the backup consistency issue when the copy is used for a recovery.
·A checkpoint is done for the tablespace, so that in case of recovery, no redo generated before that point will be applied. The begin backup command completes only when the checkpoint is done.

2- During backup mode:
·The datafile header is frozen, so that whenever it is copied it reflects the checkpoint SCN from the beginning of the backup. Then, when the copy is restored, Oracle knows that it needs to start recovery at that SCN to apply the archived redo logs. This is to avoid the header inconsistency issue.
That means that any further checkpoints do not update the datafile header SCN (but they do update a 'backup' SCN).
·The first modification to each block in the buffer cache writes the full block into the redo thread (in addition to the default behavior, which writes only the change vector).
This is to avoid the fractured block issue. There may be a fractured block in the copy, but it will be overwritten during the recovery with the full block image.


That means that everything goes on as normal except for the following operations:
- at checkpoint, the datafile header SCN is not updated
- when updating a block for the first time since it came into the buffer cache, the whole before-image of the block is recorded in redo
- direct path writes do not go through the buffer cache, but they always write full blocks, and the full block is then written to the redo log (if not in NOLOGGING mode)

3- When END BACKUP is issued:
·A record that marks the end of the backup is written to the redo thread, so that if the copy is restored and recovered, it cannot be recovered to a point earlier than that. This is to avoid the backup consistency issue.
·The hot backup flag in the datafile headers is unset.
·The header SCN is written with the current one.
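You can watch the frozen header SCN yourself. While one tablespace is in backup mode, compare the checkpoint SCN recorded in the controlfile with the one in the datafile header; this is only a sketch, but the files in backup mode should show a header SCN that lags behind as new checkpoints occur:

SELECT f.file#,
       f.checkpoint_change# AS controlfile_scn,
       h.checkpoint_change# AS header_scn
FROM v$datafile f, v$datafile_header h
WHERE f.file# = h.file#;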

STEP:2


User Managed Backup Terminology

(In this approach, operating system commands are used to make backups while the database is closed or open.)

A whole database backup refers to a backup of all the data files, control files, and log files of the database. A whole database backup can be performed with the database open or closed.

A backup taken when the database is closed is called a consistent backup, because the database file headers are consistent with the control file and, when restored completely, the database can be opened without any recovery.

A backup taken when the database is open and operational is called an inconsistent backup, because the database file headers are not consistent with the control file.

Important:

1. If the database is in NOARCHIVELOG mode, we cannot perform a user-managed hot (inconsistent) backup.

2. If the database is in NOARCHIVELOG mode and we don't want to change the mode, then we can only perform a cold (consistent) backup.

3. If the database is in NOARCHIVELOG mode, we can only restore; it is not possible to recover the database to the point of failure. That means we restore to the point of the last backup.

4. If the database is in ARCHIVELOG mode, then we can recover to the point of failure.
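Which of points 1 through 4 applies comes down to the current log mode, which is easy to check:

Select log_mode from v$database;

(or simply run ARCHIVE LOG LIST from SQL*Plus.)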

How to know Database file location information

For data files > select name from v$datafile
For control files > select name from v$controlfile
For log files > select member from v$logfile

How to know all the data files and their respective tablespaces

SQL>
SELECT T.NAME "Tablespace", F.NAME "Datafile"
FROM V$TABLESPACE T, V$DATAFILE F
WHERE T.TS# = F.TS#
ORDER BY T.NAME;

How to Make a consistent whole Database Backup

· Shut down the database.
· Back up all data files, control files, and log files using an operating system command.
· We can also include the password file and parameter file.
· Restart the Oracle database/instance.

How to make an inconsistent whole database backup

Requirements for an inconsistent database backup:

· The database is set to ARCHIVELOG mode.
· You ensure that the online redo logs are archived, either by enabling the Oracle automatic archiving (ARCn) process or by archiving them manually.
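If the database is still in NOARCHIVELOG mode, switching it over requires a clean bounce. A minimal sketch:

Shutdown immediate
Startup mount
Alter database archivelog;
Alter database open;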

Making a Backup of an Online Tablespace or Data File

· Set the datafile or tablespace in backup mode by issuing the following command:

SQL> ALTER TABLESPACE <tablespace_name> BEGIN BACKUP;

(Note: This prevents the checkpoint SCN in the datafile header from changing.)

· Use an operating system backup utility to copy all the datafiles in the tablespace to backup storage:
Copy c:\datafile_path e:\datafile_path

· After the datafiles of the tablespace have been backed up, take them out of backup mode by issuing the following command:

SQL> ALTER TABLESPACE <tablespace_name> END BACKUP;

· Repeat these steps for all tablespaces.

· Archive the unarchived redo logs:

SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

· Manual Control File Backups

Creating a binary image: 

ALTER DATABASE BACKUP CONTROLFILE TO 'control.bak';

Creating a text trace file:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

· Backing Up the Initialization Parameter File

CREATE PFILE FROM SPFILE; (for the default location)
CREATE PFILE = 'C:\BACKUP\INIT.ORA' FROM SPFILE;

Mechanism of Open database backup

When a datafile is placed in backup mode, more redo log entries may be generated, because the first change to each block of a datafile in backup mode writes the entire block image to the redo log instead of just the changed row information.

How to check backup status information (when performing an open database backup)

select * from v$backup; (Use this view to determine which files are in backup mode; when the alter tablespace begin backup command is issued, the status changes to ACTIVE.)

How to verify backup

Use the dbv (DBVERIFY) utility to ensure that a backup database or datafile is valid before a restore.

$dbv file='path of file location' start=1 logfile='enter path for log file generation'

Backup Issues with the LOGGING and NOLOGGING Options

A tablespace, table, or index may be set to NOLOGGING mode for faster loads when using a direct-load operation like SQL*Loader. Because the redo logs do not contain the values that were inserted while the object was in NOLOGGING mode, that data cannot be recovered from redo; take a fresh backup after such a load.
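Because of this, it is worth knowing which objects are in NOLOGGING mode before you rely on a backup. A quick check, run as the owning schema (use the DBA_ views for a database-wide picture):

Select table_name, logging from user_tables where logging = 'NO';
Select index_name, logging from user_indexes where logging = 'NO';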
 
STEP:3

Cold Backup in Oracle (User Managed Backup)



A user-managed backup is made by performing a physical copy of the data files using OS commands. These copies are moved to a separate location using OS commands, and the user maintains a record of the backups. For a recovery operation we have to move the files back (or to the new location of the database) and perform the recovery.

User-managed backups can be taken at the following levels:
  • Data file level
  • Tablespace level
  • Database level 


Tablespace level (online) backup:
  • the database should be in ARCHIVELOG mode
  • put the tablespace in "Begin Backup" mode  (example:  ALTER TABLESPACE users BEGIN BACKUP;  )
  • copy the physical files associated with this tablespace to another location using OS commands
  • put the tablespace in "End Backup" mode  (example:  ALTER TABLESPACE users END BACKUP;  )
  • archive the unarchived redo logs so that the redo required to recover the tablespace backup is archived  (  SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;  )
  • take a backup of all archived redo log files generated between Begin Backup and End Backup using OS commands
NOTES: 
  • Many tablespaces can be backed up in parallel. However, the online redo logs can grow large if multiple users are updating these tablespaces, because the redo must contain a copy of each changed data block; Oracle doesn't recommend this.
  • When a tablespace is in backup mode, Oracle stops updating its file headers but continues to write to the data files. While in backup mode, Oracle writes complete changed blocks to the redo log files; normally only deltas (change vectors) are logged. This is done to enable reconstruction of a block if only half of it was backed up (split blocks). Because of this, you should notice increased log activity and archiving during on-line backups. To fix this problem, simply switch to RMAN backups.
  • If the tablespace is in READ ONLY mode, we don't need to put the tablespace in Backup Mode. 
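To see which tablespaces are currently READ ONLY (and can therefore be copied without backup mode), a quick check:

SELECT tablespace_name, status FROM dba_tablespaces;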



    To find the database files, run:
    SELECT name FROM v$datafile;
    SELECT member FROM v$logfile;
    SELECT name FROM v$controlfile;

    To view which file corresponds to which tablespace you can run:
 SELECT t.NAME "Tablespace", f.NAME "Datafile"
 FROM V$TABLESPACE t, V$DATAFILE f
 WHERE t.TS# = f.TS#
 ORDER BY t.NAME;


Data file level (online) backup:
  • the database should be in ARCHIVELOG mode
  • put the datafile in OFFLINE mode  (example:  ALTER DATABASE DATAFILE 'C:\oradata\file1.dbf' OFFLINE;  )
  • copy the physical file to another location using OS commands
  • put the datafile in ONLINE mode  (example:  ALTER DATABASE DATAFILE 'C:\oradata\file1.dbf' ONLINE;  )
  • back up the control file, as the database has gone through structural changes


Tablespace recovery

This is done with the SQL> RECOVER TABLESPACE command. It is possible to perform a tablespace recovery while the rest of the database is online.

The prerequisites for a tablespace recovery are:
  • The tablespace must be OFFLINE (the database can be online)
  • Only COMPLETE recovery is possible
  • The SYSTEM tablespace can never be recovered this way, because it cannot be taken offline while the database is open.
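A minimal sketch of the flow, using the users tablespace as an example:

ALTER TABLESPACE users OFFLINE IMMEDIATE;
-- restore the tablespace's datafiles from the backup copy using OS commands, then:
RECOVER TABLESPACE users;
ALTER TABLESPACE users ONLINE;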


Data file recovery

This is done with the SQL> RECOVER DATAFILE command. It is possible to perform a data file recovery while the rest of the database is online or offline.

The prerequisites for a data file recovery are:
  • The data file must be OFFLINE (the database can be online)
  • SYSTEM data files can never be recovered this way, because the SYSTEM tablespace cannot be taken offline.


Closed (cold) database backup

If the database was shut down cleanly (SHUTDOWN IMMEDIATE or NORMAL), we only have to copy the data files, redo log files, and control files to a new location. This kind of backup is used for a database in NOARCHIVELOG mode that is not used for a 24x7 business.



Open database backup

An Open Database Backup is a backup taken while the database is up and running. This is done by putting each tablespace in Backup mode and copying the data files and control files. All the latest archived log files must be copied as well. V$BACKUP and V$DATAFILE_HEADER should be queried after the database backup to see whether all the data files are back in online (non-backup) mode.



Control file backup

Backup the binary file:
ALTER DATABASE BACKUP CONTROLFILE TO 'C:\backups\control1.bkp';

Generate the script to recreate the control file:
 ALTER DATABASE BACKUP CONTROLFILE TO TRACE AS 'C:\backups\control1.txt';
or
 ALTER DATABASE BACKUP CONTROLFILE TO TRACE;  (the file will be created in USER_DUMP_DEST)



An online backup failure can occur if:
  • An instance failure occurs
  • An OS failure occurs
  • The database is accidentally shut down.
The database could be stopped by using SHUTDOWN ABORT. When the database is brought back up, media recovery is needed and the instance will stop in MOUNT state. In MOUNT state we can query the file views to see what is happening (optional). In Oracle 9i and later we can use RECOVER DATABASE to do an automatic recovery (the database will be consistent again and the tablespace will no longer be in Backup mode). After this recovery a new backup should be taken.


To see the status of the files during the online user-managed backups the following select could be used:
       SELECT DECODE(b.status, 'ACTIVE', 'BACKUP_IS_RUNNING', b.status) "Backup_Status",
              b.time "Backup_Start_Time",
              fh.status "File_Status",
              fh.tablespace_name "Tablespace_Name",
              fh.name "File_Name"
       FROM v$backup b,
            v$datafile_header fh
       WHERE b.file# = fh.file#;