Saturday, April 28, 2012

Utilizing Content Servers with Oracle Learning Management



Utilizing Content Servers with Oracle Learning Management [ID 374677.1]
  Modified 14-APR-2011     Type WHITE PAPER     Status PUBLISHED
White Paper

Checked for relevance on 08-JAN-2008
Utilizing Content Servers with Oracle Learning Management

See Change Record

Online learning content accessed through Oracle Learning Management (OLM) is stored and served by one or more content servers. This document is intended for technical and functional administrators of OLM who want to understand how OLM interacts with content servers.

This document describes:

What a content server is
Supported configurations for an OLM content server
How content gets transferred from the administrative user interface to the content server
The interaction between the application, the content server, and the user's PC when accessing content
Troubleshooting content server setup and file transfer issues
TOC

This white paper contains the following information.

Content Servers Described
Oracle Learning Management Content Server Supported Configurations
Key Oracle Learning Management Content Server Setup Steps
How an Oracle Learning Management Content Server Works
Content Server Troubleshooting
Content Servers Described

Overview

A content server, in simplest terms, is a computer with a web server that stores online learning content files. Learning objects that exist within OLM point to online learning content stored on a content server. When a learner accesses online learning content through the OLM player, the content server delivers the content to the learner's browser.

There are two types of content servers that can be utilized with OLM:

Oracle Learning Management Content Server
External Content Server
The two types of content servers are described in the sections below.

Oracle Learning Management Content Servers

An OLM content server provides additional functionality for administrators that is not possible with an external content server. This additional functionality enables administrators to upload online learning content to the content server through the OLM administrative user interface. Also, when learning object metadata is imported into OLM, the administrator has the capability of automatically constructing the starting URL of the learning object by selecting the content server and the directory where the starting page of the content resides. OLM then constructs the appropriate URL to which the learners are directed when launching online learning content.



Note:
An OLM content server cannot be set up and configured on any middle tier application server running the Oracle eBusiness Suite. It must be set up and configured on a separate machine outside the eBusiness Suite middle tier application servers. An OLM content server should reside in the same domain as OLM.

For example, if your eBusiness Suite middle tiers are ebiz1.mycompany.com and ebiz2.mycompany.com, the OLM content server cannot be configured on these middle tiers. You would set up a new machine, like contentserver.mycompany.com whose primary purpose would be to act as the content server for OLM.

External Content Servers

An external content server is any other content server machine that is not configured to be an OLM content server. External content servers do not provide the capability for administrators to upload content to the content server directly through the OLM administrative user interface. Typically, administrators either have direct access to the external content server machine to upload content files, or use FTP to transfer the content.

An external content server can reside anywhere, within a company's intranet or on the internet. For example, customers might access content from a content vendor hosted on the content vendor's servers. In this case, those content servers are considered external content servers. Any other machine running a web server can be utilized as an external content server.

Content Management systems such as Oracle Content Services can also be used as content servers. In this case, the content files are stored in a database rather than on the content server's file system.
Content Server Supported Configurations

Overview

This section outlines the supported configurations for both OLM and external content servers.

The following table provides a summary of typical questions that arise when determining your content server strategy:

Question Response
Can an Oracle eBusiness Suite middle tier be used as an OLM content server? No
Can OLM work with multiple content servers? Yes
How many content servers can reside on one machine? One
How many external content servers can be used with OLM? Unlimited
Can an OLM Content Server reside in the same domain as OLM (eBusiness Suite)? Yes
Can an OLM Content Server reside in a different domain? Yes
Can an external content server reside in the same domain? Yes
Can an external content server reside in a different domain? Yes
Is SSL supported? No


Oracle Learning Management Content Server Configurations

The following table outlines the currently tested and supported configurations for an OLM content server.

Note:
The table below only applies to content servers that are registered in OLM as a content server, providing the capability to upload content directly to the content server through the administrative user interface.

This table is not an exhaustive list of all configurations that will work for an OLM content server, but should serve as a guideline for your content server setup. The most important factor in determining if your content server configuration will work is to ensure the content server is running Apache + Jserv. If it is not, content upload capability through the administrative user interface is not possible.

Server Version OS Comments

Standalone Apache & Jserv (Apache 1.3.33, Jserv 1.1.2) on Windows XP
Apache download URL
JServ download URL
Jserv 1.1 supports Apache 1.2.x and 1.3.x.
The Jserv module has been deprecated in favor of Tomcat, but Tomcat is not yet supported.

Oracle 9iAS 10g (9.0.4) on Windows XP
This has a built-in Apache (1.3.31) and Jserv (1.1).

Oracle 9iAS 9.0.3 on Windows 2000
This has a built-in Apache (1.3.22) and Jserv (1.1).

Oracle Database 9.2.0.1 on Solaris 2.8
This has a built-in Apache (1.3.x) and Jserv (1.1).
Where Can an Oracle Learning Management Content Server Reside?

An OLM content server can reside within the same domain as OLM or in a different domain. The applications middle tier servers must be able to connect to the content server, especially if the content server resides on a different domain.

If the content server is installed in an iAS instance in a separate Oracle home, it should be possible to run it on the Oracle eBusiness Suite middle tier machine.
Note 427311.1 describes how to install a content server on an Oracle 10g iAS 10.1.3.
Can an Oracle Learning Management Content Server Be Available on the Internet?

Yes, an OLM content server can be made available over the internet. If the content must be accessed by external users, the content server should be placed in the demilitarized zone (DMZ).

Internet users are redirected from the OLM server to the content server; the OLM server then steps out of the communication between the user's browser and the content server.

For more information on using a DMZ, refer to Note 287176.1, DMZ Configuration with Oracle E-Business Suite 11i.

Is SSL Supported For an Oracle Learning Management Content Server?

No. SSL is not currently supported.

Key Oracle Learning Management Content Server Setup Steps

The OLM user guide (Metalink Doc ID#341440.1) provides the setup steps necessary to configure a content server for use as an OLM content server. Some key setup steps that are often overlooked or misread are included below:

Steps to Set Up the Container

(1) Install the container, such as Apache and ApacheJserv.

(2) Create a servlet repository called servlets. For example, in Jserv, the servlets
repository exists by default.

(3) Copy the following files to your servlet repository directory:
$JAVA_TOP/oracle/apps/ota/admin/common/util/ContentServerServlet.class
$JAVA_TOP/oracle/apps/ota/admin/common/util/ProtocolConstants.class
$JAVA_TOP/oracle/apps/ota/admin/common/util/SystemUtils.class
$JAVA_TOP/oracle/apps/ota/admin/common/util/SystemUtils$JarUtility.class
$JAVA_TOP/oracle/apps/ota/admin/common/util/ContentServerClientData.class


Note:
The file location depends on the variable $JAVA_TOP.
If the application is running correctly, these files will exist under that directory.

Make sure that the ENTIRE directory structure is copied over to the servlet repository directory.
So if the servlet repository directory is:

$ORACLE_HOME/Apache/Jserv/servlets

then the destinations of these classes are as follows:

$ORACLE_HOME/Apache/Jserv/servlets/oracle/apps/ota/admin/common/util/ContentServerServlet.class
$ORACLE_HOME/Apache/Jserv/servlets/oracle/apps/ota/admin/common/util/ProtocolConstants.class
...
etc
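The copy in steps 3 and 4 can be scripted. The sketch below is illustrative and assumes the paths shown above; `copy_servlet_classes` is a hypothetical helper name, and `tar` is used because it preserves the package directory structure during the copy:

```shell
# Sketch: copy the OLM servlet classes into the servlet repository while
# preserving the oracle/apps/... package directories. The function name
# and paths are examples, not part of the OLM documentation.
copy_servlet_classes() {
  # $1 = source directory ($JAVA_TOP), $2 = destination servlet repository
  ( cd "$1" && tar cf - \
      oracle/apps/ota/admin/common/util/ContentServerServlet.class \
      oracle/apps/ota/admin/common/util/ProtocolConstants.class \
      oracle/apps/ota/admin/common/util/SystemUtils.class \
      'oracle/apps/ota/admin/common/util/SystemUtils$JarUtility.class' \
      oracle/apps/ota/admin/common/util/ContentServerClientData.class \
      oracle/apps/fnd/common/VersionInfo.class ) \
    | ( cd "$2" && tar xf - )
}

# Example invocation (adjust the paths for your installation):
# copy_servlet_classes "$JAVA_TOP" "$ORACLE_HOME/Apache/Jserv/servlets"
```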
(4) Also ensure the oracle.apps.fnd.common.VersionInfo.class file is located
under the class path directory that is defined for the repository.

Note:
This file should be found under $JAVA_TOP as follows:
$JAVA_TOP/oracle/apps/fnd/common/VersionInfo.class

Again, the entire directory structure needs to be copied. For example:

$ORACLE_HOME/Apache/Jserv/servlets/oracle/apps/fnd/common/VersionInfo.class

(5) In zone.properties, add the following:

# ----- OLM Content server ----- #
servlet.OtaContentServerServlet.code=oracle.apps.ota.admin.common.util.ContentServerServlet

servlet.OtaProtocolConstants.code=oracle.apps.ota.admin.common.util.ProtocolConstants
servlet.OtaSystemUtils.code=oracle.apps.ota.admin.common.util.SystemUtils

servlet.OtaSystemUtils$JarUtility.code=oracle.apps.ota.admin.common.util.SystemUtils$JarUtility
servlet.OtaContentServerClientData.code=oracle.apps.ota.admin.common.util.ContentServerClientData

#----- END OLM
Note:
The zone.properties file location can vary depending on the implementation; it is usually under the Apache server installation.
Ask the administrator who set up the Apache server, or look under the Apache server setup.

(6) Create an alias for the directory where the content will be stored. For example, in Apache, to create an alias for the physical directory
D:/apache/rootdir/, add the following inside the httpd.conf file:

Alias /content/ "D:/apache/rootdir/"
(7) Add the list of middle tiers and the temporary file location as arguments to the Java Interpreter. For example, in Jserv, add following lines inside the
jserv.properties file:
wrapper.bin.parameters=-Dmiddletier=148.87.19.51+148.87.19.50+144.25.78.202+10.10.20.140
wrapper.bin.parameters=-DTemp=D:\temp
wrapper.bin.parameters=-Djava.io.tmpdir=/dbfiles/applcsf/log
Here the numbers, such as 148.87.19.51 and 148.87.19.50, are IP addresses of all trusted middle tiers, and + is the separator for multiple addresses.
Servers not specified in the list are not able to access the servlet.
D:\temp is the absolute path of the temp location where files are saved temporarily while handling physical content.
/dbfiles/applcsf/log is any existing directory for log files.
Note:
The jserv.properties file is also located under the Apache server installation.
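To illustrate how the -Dmiddletier list from step 7 behaves (a sketch of the documented behavior, not the actual servlet code): the value is split on '+', and only requests originating from addresses in the resulting list are allowed to reach the servlet.

```shell
# Illustrative only: check whether an address appears in a '+'-separated
# -Dmiddletier list, the way the content server restricts servlet access.
is_trusted_middle_tier() {
  # $1 = value of -Dmiddletier, $2 = address to check; exit 0 if trusted
  echo "$1" | tr '+' '\n' | grep -Fqx "$2"
}

# is_trusted_middle_tier "148.87.19.51+148.87.19.50" "148.87.19.50"  # trusted
# is_trusted_middle_tier "148.87.19.51+148.87.19.50" "10.0.0.1"      # not trusted
```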

How an Oracle Learning Management Content Server Works

Transferring Files to an Oracle Learning Management Content Server Through the Administrative Interface

Setting up a content server as an OLM content server enables administrators to upload content files directly to the content server through the administrative user interface.

The communication between the middle tier application servers and the content server happens based on a protocol stack. When a file is uploaded through the OLM administrative user interface, it first goes to the middle tier applications server, then is transferred to the content server based on that protocol stack. The content server then responds to certain commands from the middle tier and finally unzips/distributes the files in their respective places.

The session is maintained by the content server based on a counter. The session is maintained for each process, and each process is broken down into commands. For example, upload has commands like start upload, transfer, unzip, and cleanup. The content server JServ should typically be configured with a large heap size, since some of the file management functionality can consume memory on the order of the file size. The memory requirement is not strictly proportional to each file's size, because input/output occurs in chunks of the file.
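One way to provide that heap is through the Java interpreter arguments in jserv.properties. The values below are illustrative placeholders, not recommendations; size the maximum heap relative to your largest content uploads:

```
# jserv.properties -- example heap settings (values are placeholders)
wrapper.bin.parameters=-Xms256m
wrapper.bin.parameters=-Xmx1024m
```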

How Oracle Learning Management Plays Content

The following section contains excerpts from "Deploying e-Learning Content in Oracle Learning Management", metalink note ID# 308703.1. This section is relevant for both OLM content servers and external content servers.

Overview

When OLM plays content, it pulls together the content structure information from the OLM database and the media files from a content server, in the OLM player.

The OLM player enables the learner to navigate and play content. There are several options for configuring the player, but the most common option is to have a toolbar across the top of the window, an outline frame to the left, and the actual course content played in the frame to the right.

OLM generates the toolbar and outline frames from data about the content and the learner stored in the database. Note that these frames are created and reside on the LMS server, which in many cases is a different server from where the content is stored. Each launchable topic in the course outline has a content location stored in the database. When a learner selects a learning object in the outline, the corresponding content is loaded from a content server into the content frame on the right.

Technical Details

Once a learner has launched a learning object and is in the OLM player, the content server serves the content directly to the learner's browser. If the content is SCORM or AICC/HACP compliant, the content sends and receives data during the session by interacting with the application server either directly or indirectly:

For SCORM compliant content, a lightweight java applet is loaded on the learner's PC. The content on the content server interacts with the local java applet to send and receive data. In turn, the java applet communicates back with the OLM application server, which sends and receives data from the database.
For AICC/HACP compliant content, the content on the content server interacts with the OLM application server, which sends and receives data from the database.
When a learner switches learning objects or exits the course and returns to the learner home page, the communication is between the learner's browser and the application server.

Content Server Troubleshooting

Overview

This section lists the common issues that arise when configuring and utilizing an OLM content server.

java.lang.OutOfMemoryError When Uploading Content

When uploading large content files to the content server, the above error is thrown.

Possible Cause and Resolution:

Verify timeout settings on iAS (web.xml) and Apache (httpd.conf) and increase if necessary
Ensure write permissions on the content server are set correctly
Ensure the jserv/servlets directory contains the appropriate class files:
oracle.apps.fnd.common.VersionInfo.class
oracle.apps.ota.admin.common.util.ContentServerClientData.class
oracle.apps.ota.admin.common.util.ContentServerServlet.class
oracle.apps.ota.admin.common.util.ProtocolConstants.class
oracle.apps.ota.admin.common.util.SystemUtils.class
oracle.apps.ota.admin.common.util.SystemUtils$JarUtility.class
Verify the value wrapper.bin.parameters in jserv.properties
Verify enough space is available on the content server
Verify proxy and firewall configurations between the browser, application server, and content server to ensure there are no FTP or upload limits defined by your network/firewall team
Verify the KeepAlive settings in Apache on the content server machine
You can check tuning for KeepAlive, MaxKeepAliveRequests, KeepAliveTimeout settings in httpd.conf.
http://httpd.apache.org/docs/1.3/mod/core.html
Check the size of any .dat file created during upload. The temp file directory for the .dat is taken from System property "java.io.tmpdir" .
Check the network connection speed settings for mismatches. Typically this should be set to 100 Mb full duplex (100MB/FD), not half duplex (100MB/HD)
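The class-file check in the list above can be scripted. This is a sketch; `check_content_server_classes` is a hypothetical helper name, and the repository path in the example invocation is a placeholder:

```shell
# Sketch: verify the servlet repository contains every class file the
# content server needs; prints any that are missing.
check_content_server_classes() {
  # $1 = servlet repository directory; returns non-zero if any file is absent
  missing=0
  for f in \
      oracle/apps/fnd/common/VersionInfo.class \
      oracle/apps/ota/admin/common/util/ContentServerClientData.class \
      oracle/apps/ota/admin/common/util/ContentServerServlet.class \
      oracle/apps/ota/admin/common/util/ProtocolConstants.class \
      oracle/apps/ota/admin/common/util/SystemUtils.class \
      'oracle/apps/ota/admin/common/util/SystemUtils$JarUtility.class'; do
    if [ ! -f "$1/$f" ]; then
      echo "MISSING: $f"
      missing=1
    fi
  done
  return $missing
}

# check_content_server_classes "$ORACLE_HOME/Apache/Jserv/servlets"
```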
Authentication Failure When Uploading Content

When uploading content to the content server, the following error is thrown:

"Authentication Failure: You do not have permissions to use the content server"

Possible Cause and Resolution:

The Oracle Learning Management content server must be on a separate server outside the eBusiness Suite middle tier application servers. If the content server is on the same server as one of the eBusiness Suite middle tier servers, this is the issue.
Ensure the class files are created with the complete directory path. For example:
If c:\oracle\product\10gas\Apache\Jserv\servlets is the servlets directory, then ContentServerServlet.class goes under:
c:\oracle\product\10gas\Apache\Jserv\servlets\oracle\apps\ota\admin\common\util\ContentServerServlet.class
Verify there are no write permissions issues to the content server directory and also the applications middle tier
Ensure Jserv is configured properly. You can use the sample IsItWorking servlet in the servlet repository to ensure Jserv is set up and configured properly. If this servlet does not work, refer to iAS documentation to properly configure Jserv before continuing.
Importing SCORM Content Throws "Cannot Build Schema" Error

When importing SCORM compliant metadata and content files, an error similar to the following is thrown:

"Can not build schema 'http://www.imsproject.org/xsd/imscp_rootv1p1p2' located at 'imscp_rootv1p1p2.xsd'"

Resolution:

Replace the XSD files in the zip file with the XSD files available from metalink Note ID #221102.1
Note:
This issue is bug #4173535



Change Record

Date Description of Change
02-MAY-2006
Created document.
04-MAY-2006 Modified based on reviews
03-JUN-2006 Modified based on reviews
08-JUN-2006 Modified based on reviews
Oracle Corporation

Author and Date
Scott Morris 08-JUN-2006

Copyright Information
Copyright © 2004, 2005, 2006 Oracle. All rights reserved.

Disclaimer
This document in any form, software or printed matter, contains proprietary information that is the exclusive property of Oracle. Your access to and use of this confidential material is subject to the terms and conditions of your Oracle Software License and Service Agreement, which has been executed and with which you agree to comply. This document and information contained herein may not be disclosed, copied, reproduced or distributed to anyone outside Oracle without prior written consent of Oracle. This document is not part of your license agreement nor can it be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates.

This document is for informational purposes only and is intended solely to assist you in planning for the implementation and upgrade of the product features described. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described in this document remains at the sole discretion of Oracle.

Due to the nature of the product architecture, it may not be possible to safely include all features described in this document without risking significant destabilization of the code.

Trademark Information
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.


Tuesday, April 24, 2012

IOR file '/var/tmp/gconfd-root/lock/ior' not opened successfully


IOR file '/var/tmp/gconfd-root/lock/ior' not opened successfully, no gconfd located
While logging in to Solaris as root or any other user, errors like the following are shown:

GConf Error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://www.gnome.org/projects/gconf/ for information. (Details - 1: IOR file '/var/tmp/gconfd-root/lock/ior' not opened successfully, no gconfd located: No such file or directory 2: IOR file '/var/tmp/gconfd-root/lock/ior' not opened successfully, no gconfd located: No such file or directory)

I checked the permissions on the /var/tmp/gconfd-root directory: it had 777 permissions.
The log under /var/log said that 777 is a bad permission setting for gconfd-root.
I changed the permissions to 700, and all seems OK now.
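The fix can be expressed as a short script. `fix_gconfd_perms` is an illustrative name; /var/tmp/gconfd-root applies when logging in as root, and other users get a gconfd-[username] directory:

```shell
# gconfd refuses a lock directory that is group/world accessible;
# reset it to owner-only access (0700) and show the resulting mode.
fix_gconfd_perms() {
  # $1 = per-user gconfd directory, e.g. /var/tmp/gconfd-root
  chmod 700 "$1"
  ls -ld "$1" | awk '{print $1}'
}

# fix_gconfd_perms /var/tmp/gconfd-root
```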

Thursday, April 12, 2012

Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2



Export/Import Process for Oracle E-Business Suite Release 12 using 10gR2 [ID 454616.1]

 Modified 06-FEB-2012     Type WHITE PAPER     Status PUBLISHED 

Export/Import Process for Oracle E-Business Suite Release 12
Database Instances Using Oracle Database 10g Release 2

February 2012


This document describes the process of re-creating an existing Applications Release 12 database instance using the datapump utilities. The most current version of these notes is document 454616.1 on My Oracle Support. There is a change log at the end of this document.
The datapump utilities allow you to move existing data in Oracle format to and from Oracle databases. For example, datapump export files can archive database data, or move data among different Oracle databases that run on the same or different operating systems. This document assumes that you are already familiar with datapump.
There are special considerations when exporting or importing an Applications Release 12 database instance. This process consists of five discrete steps. Each step is covered in a separate section in this document.
The source (export from) and target (import to) ORACLE_HOME directories must be Oracle Database 10g Release 2 (10.2.0).
The export/import process requires the use of both the datapump utilities (expdp/impdp) and the traditional export/import (exp/imp). For more information, read Oracle Database Utilities 10g Release 2 (10.2).
Attention: This document uses UNIX/Linux syntax when describing directory structures. However, it applies to Windows servers as well. Where there is a significant difference in tasks for Windows, specific instructions are given.
Some of the tasks in this document affect the APPL_TOP of one or more application server tiers. Those tasks require that the Applications file system environment be enabled by running the APPSORA.env file (for UNIX or Linux) or the envshell.cmd file (for Windows) prior to performing the tasks. Other tasks affect the Applications database instance. Those tasks require that the Oracle 10g environment be enabled by running the [ORACLE_SID].env/cmd file under the Oracle 10g Oracle home on the database server node prior to performing the tasks. In addition, you may have more than one Oracle home installed on the database server node, so it is important that you run the correct [ORACLE_SID].env/cmd file before performing tasks that affect the database instance. Read the instructions carefully to determine which environment should be enabled for each step.
Attention: This document assumes that the source and target application server tiers are the same. Creating new application server tiers for the target environment has to be done either before starting or after completing all the steps in this document. Then, update and run AutoConfig for the source database and application server tiers to enable the source environment.
Attention: If you are using Oracle Database Vault, refer to Note 822048.1 before performing any step in this document.

Section 1: Prepare the source system

This section describes how to ensure that you have the required patches, create your export file, and capture important information that is required to import your database.
  1. Apply latest AutoConfig patches
    Perform steps 3.1 and 3.2.1 of the Using AutoConfig to Manage System Configurations in Oracle Applications Release 12 document. The other steps in 3.2 are not necessary as they will be done at the target side.
  2. Apply the Applications consolidated export/import utility patch
    Apply patch 13023290 to the source administration server node. This patch provides several SQL scripts that facilitate exporting and importing an Applications database instance, export and import parameter files, and a perl script, which creates an AD patch driver.
  3. Apply latest Applications database preparation scripts patch (conditional)
    If you are using Oracle E-Business Suite Release 12.0, apply patch 6342289 to every application tier server node in the source system.
  4. Create a working directory
    Create a working directory named expimp in the source system that will contain all generated files and scripts required to complete this section. As an example,
    $ mkdir /u01/expimp
    
  5. Generate target database instance creation script aucrdb.sql
    The target database instance must be created with the same tablespace and file structure as the source database instance. The export/import patch provides the auclondb.sql script which generates the aucrdb.sql script, which you use to create the target database instance with the appropriate tablespace and file structure. The script converts all tablespaces except for SYSTEM to locally managed tablespaces with auto segment space management, if they are not already so.
    On the source administration server node, use SQL*Plus to connect to the database as SYSTEM and run the $AU_TOP/patch/115/sql/auclondb.sql script. It creates aucrdb.sql in the current directory.

    $ sqlplus system/[system password] \
        @$AU_TOP/patch/115/sql/auclondb.sql 10
    
  6. Record Advanced Queue settings
    Advanced Queue settings are not propagated in the target database instance during the export/import process. Therefore, you must record them beforehand and enable them in the target database instance afterwards. The export/import patch contains auque1.sql, which generates a script called auque2.sql. You can use auque2.sql to enable the settings in the target database instance.
    Copy the auque1.sql script from the $AU_TOP/patch/115/sql directory on the source administration server node to the working directory in the source database server node. Then, on the source database server node, as the owner of the source database server file system and database instance, use SQL*Plus to connect to the source database as sysdba and run the auque1.sql script. It generates auque2.sql.

    $ sqlplus /nolog
    SQL> connect / as sysdba;
    SQL> @auque1.sql
    
  7. Create parameter file for tables with long columns (conditional)
    The fix to this issue is part of 10.2.0.5. If you are on 10.2.0.4 or prior versions of 10g Release 2, tables with long columns may not propagate properly in datapump. Therefore, they have to be migrated separately using the traditional export/import utilities.
    Copy the aulong.sql script from the $AU_TOP/patch/115/sql directory on the source administration server node to the working directory in the source database server node. Then, on the source database server node, as the owner of the source database server file system and database instance, use SQL*Plus to connect to the source database as SYSTEM and run the aulong.sql script. It generates aulongexp.dat.
    $ sqlplus /nolog
    SQL> connect system/[system password];
    SQL> @aulong.sql
    
  8. Remove rebuild index parameter in spatial indexes
    Ensure that you do not have the rebuild index parameter in the spatial indexes. To see if you have any rebuild index parameters, on the source database server node, as the owner of the source database server file system and database instance, use SQL*Plus to connect to the source database as sysdba and run the following command:
    SQL> select * from dba_indexes where index_type='DOMAIN' and
      upper(parameters) like '%REBUILD%';
    
    To remove the rebuild index parameter, use SQL*Plus to connect to the source database as the owner of the index and run the following command:
    SQL> alter index [index name] rebuild parameters [parameters]
    
    where [parameters] is the original parameter set without the rebuild_index parameter.
  9. Synchronize Text indexes
    Unsynchronized Oracle Text indexes slow down the export process. Ensure that the indexes are synchronized before running the export. Use SQL*Plus to connect to the source database as SYSDBA and run the following command to find all indexes pending synchronization:
    $ sqlplus '/ as sysdba'
    SQL> select pnd_index_owner, pnd_index_name, count(*)
      from ctxsys.ctx_pending
      group by pnd_index_owner, pnd_index_name;
    
    To synchronize the indexes, run the following command:
    SQL> exec ctx_ddl.sync_index('[index owner].[index name]');
    

Section 2: Prepare a target Release 12 database instance

This section describes how to create the empty target database and populate it with all of the required system objects prior to running import.
The Oracle home of the target database instance can be the same Oracle home that the source database instance uses, or it can be different (on another machine running a different operating system, for example), as long as it uses Oracle Database 10g Release 2 Enterprise Edition.
  1. Create target Oracle 10g Oracle home (conditional)
    If you want the target Oracle 10g Oracle home to be separate from the source Oracle home, you must create it now. Decide whether you want to install the 10.2.0 Oracle home manually, or use the Rapid Install to create it for you.
    If you choose to use Rapid Install, you must use Rapid Install Release 12.0.0. As the owner of the Oracle RDBMS file system, start the Rapid Install wizard by typing:
    $ rapidwiz -techstack
    
    Choose the "10gR2 RDBMS" option in the techstack components window and provide the details for the new Oracle home. Make sure that the SID environment setting is set to the same value as your existing database instance.
    If you choose to manually install the 10.2.0 Oracle home, log in to the database server node as the owner of the Oracle RDBMS file system and database instance and perform the following steps:
    1. Ensure that environment settings, such as ORACLE_HOME, are set for the new Oracle home you are about to create.
    2. Perform all the steps in Chapter 3 of the Oracle Database Installation Guide 10g Release 2 (10.2), for your platform.
    3. In the subsequent windows, click on the Product Languages button and select any languages other than American English that are used by your Applications database instance, choose the Enterprise Edition installation type, and select the options not to upgrade an existing database and to install the database software only.
    4. Perform tasks in section 3.5, "Installing Oracle Database 10g Products" in the Oracle Database Companion CD Installation Guide for your platform. Do not perform the tasks in the "Preparing Oracle Workflow Server for the Oracle Workflow Middle Tier Installation" section.
    5. In the Installation Types window, click on the Product Languages button to select any languages other than American English that are used by your Applications database instance.
    6. Make sure that the following environment variables are set whenever you enable the 10g Oracle home:
      • ORACLE_HOME points to the new 10.2.0 Oracle home.
      • PATH includes $ORACLE_HOME/bin and the directory where the new perl executable is located (usually $ORACLE_HOME/perl/bin).
      • LD_LIBRARY_PATH includes $ORACLE_HOME/lib.
      • PERL5LIB points to the directories where the new perl libraries are located (usually $ORACLE_HOME/perl/lib/[perl version] and $ORACLE_HOME/perl/lib/site_perl/[perl version])
    7. Run the $ORACLE_HOME/nls/data/old/cr9idata.pl script to create the $ORACLE_HOME/nls/data/9idata directory. After creating the directory, make sure that the ORA_NLS10 environment variable is set to the full path of the 9idata directory whenever you enable the 10g Oracle home.

      Attention: Check to make sure the $ORACLE_HOME/nls/data/9idata directory is created and is non-empty.

    Attention (for Windows users): Keep track of the database home name used. For Rapidwiz installed Oracle homes, the home name is [SID]_db102_RDBMS. For manually installed Oracle homes, the home name is what you input when creating the Oracle home.
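    The environment requirements in steps 6 and 7 above lend themselves to a quick pre-flight check. The following is a minimal, hypothetical sketch (the function name and the exact messages are illustrative, not an Oracle-supplied tool); it mirrors the checks on ORACLE_HOME, PATH, LD_LIBRARY_PATH, and ORA_NLS10:

```python
import os

def check_10g_env(env):
    """Return a list of problems with a 10g Oracle home environment.

    `env` is a dict of environment variables, e.g. dict(os.environ).
    Mirrors steps 6 and 7: ORACLE_HOME must be set, PATH must include
    $ORACLE_HOME/bin, LD_LIBRARY_PATH must include $ORACLE_HOME/lib,
    and ORA_NLS10 must point at $ORACLE_HOME/nls/data/9idata.
    """
    problems = []
    home = env.get("ORACLE_HOME")
    if not home:
        return ["ORACLE_HOME is not set"]
    if home + "/bin" not in env.get("PATH", "").split(os.pathsep):
        problems.append("PATH does not include $ORACLE_HOME/bin")
    if home + "/lib" not in env.get("LD_LIBRARY_PATH", "").split(os.pathsep):
        problems.append("LD_LIBRARY_PATH does not include $ORACLE_HOME/lib")
    if env.get("ORA_NLS10") != home + "/nls/data/9idata":
        problems.append("ORA_NLS10 does not point to $ORACLE_HOME/nls/data/9idata")
    return problems
```

    For example, `check_10g_env(dict(os.environ))` returns an empty list when the session is set up correctly. (PERL5LIB and the existence of the 9idata directory could be checked the same way.)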
  2. Upgrade to the latest 10.2.0 patch set (conditional)
    If you are not on the latest patch set, perform the following steps from the Oracle E-Business Suite Release 12 with Oracle Database 10g Release 2 (10.2.0) Interoperability Notes on My Oracle Support:
    1. Perform 10.2.0.x Patch Set installation tasks
    2. Apply additional 10.2.0.x RDBMS patches
    Do not perform any post-installation patch README steps.
  3. Modify sqlnet.ora file (Windows only)
    If the target database server node is running Windows, add the following line to the sqlnet.ora file in the %ORACLE_HOME%\network\admin\[SID] directory, if it does not already exist:
    SQLNET.AUTHENTICATION_SERVICES=(NTS)
    
  4. Create the target initialization parameter file and CBO parameter file
    The initialization parameter file (init[SID].ora) and cost-based optimizer (CBO) parameter file (ifilecbo.ora) are located in the $ORACLE_HOME/dbs directory on the source database server node. Copy both files to the Oracle 10g $ORACLE_HOME/dbs directory on the target database server node.
    Refer to Database Initialization Parameters for Oracle Applications Release 12 and update both the init.ora and ifilecbo.ora files with any necessary changes. You may also need to update initialization parameters involving the db_name, control_files, and directory structures.
    Ensure that the undo_tablespace parameter in the initialization parameter file of the target database instance matches the default undo tablespace set in the aucrdb.sql script.
    Ignore the initialization parameters that pertain to the native compilation of PL/SQL code. You will be instructed to add them later, if necessary.
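    One way to double-check the undo_tablespace match described above is to read the parameter out of the file before creating the instance. A small sketch, assuming the usual name=value init.ora layout (the helper and sample values are illustrative):

```python
def read_init_param(text, name):
    """Return the value of an init.ora-style parameter, or None.

    Handles simple `name = value` lines as found in init[SID].ora;
    anything after a '#' is treated as a comment and ignored.
    """
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, value = line.split("=", 1)
            if key.strip().lower() == name.lower():
                return value.strip()
    return None

# Hypothetical fragment of an init[SID].ora file:
init_ora = """
db_name = PROD
undo_management = AUTO
undo_tablespace = APPS_UNDOTS1   # must match aucrdb.sql
"""
```

    `read_init_param(init_ora, "undo_tablespace")` would then be compared against the undo tablespace name in aucrdb.sql.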
  5. Create a working directory
    Create a working directory named expimp in the target system that will contain all generated files and scripts required to complete this section. As an example,
    $ mkdir /u01/expimp
    
  6. Create the target database instance
    Copy the aucrdb.sql script, generated in Section 1, from the source administration server node to the working directory in the target database server node. Then update the script on the target database server node with any necessary changes to the directory structures for the log file(s), data file(s), or tablespaces, reflecting the layout of the target database server node. If the source and target database server nodes run different operating systems, convert the directory paths accordingly (for example, from UNIX/Linux format to Windows format).

    Attention: Using the source tablespace information does not guarantee that the target tablespaces will be large enough. It is highly recommended that you query the DBA_FREE_SPACE view on the source database to identify tablespaces that are running low on free space, and modify the aucrdb.sql script to ensure ample tablespace sizes on the target database.
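    The sizing check in the attention box can be sketched as follows. The free-space figures would come from a query against DBA_FREE_SPACE on the source database; the helper name and the 500 MB threshold are illustrative assumptions, not part of the procedure:

```python
def low_space_tablespaces(free_mb_by_ts, threshold_mb=500):
    """Given a mapping of {tablespace name: free space in MB},
    return the tablespaces below the threshold -- candidates for
    enlarging in aucrdb.sql before creating the target instance."""
    return sorted(ts for ts, free in free_mb_by_ts.items() if free < threshold_mb)
```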
    Make sure that the environment of your session on the target database server node is set up properly for the target database instance, especially the ORACLE_HOME, ORACLE_SID, and ORA_NLS10 environment settings. (ORACLE_SID must be set to the same value as the db_name parameter in the init[SID].ora file.) Then, use the following commands to run aucrdb.sql and create the target database instance:
    $ sqlplus /nolog
    SQL> connect / as sysdba;
    SQL> spool aucrdb.log;
    
    For UNIX or Linux:
    SQL> startup nomount; 
    SQL> @aucrdb.sql
    SQL> exit;
    
    For Windows:
    SQL> startup nomount pfile=%ORACLE_HOME%\dbs\init%ORACLE_SID%.ora
    SQL> @aucrdb.sql
    SQL> exit;
    
    If the PL/SQL in the source database was natively compiled, see the "Compiling PL/SQL Code for Native Execution" section of Chapter 11 of Oracle Database PL/SQL User's Guide and Reference 10g Release 2 (10.2) for instructions on how to natively compile PL/SQL in the target database. Add the parameters that pertain to the native compilation where specified. Do not use the natively compiled code generated by the source database. Oracle does not support switching the PL/SQL compilation mode from interpreted to native (and vice versa) for an export/import. Exporting/importing in native mode takes significantly more time than in interpreted mode.
    When the target database instance has been created, restart the database instance.
  7. Copy database preparation scripts to target Oracle home
    The database preparation scripts that you applied to the source administration server node in Section 1 contain four scripts that are needed on the target database server node. Copy the following files from the $APPL_TOP/admin directory of the source administration server node to the working directory in the target database server node: addb1020.sql, adsy1020.sql, adjv1020.sql, and admsc1020.sql (UNIX or Linux) or addb1020_nt.sql, adsy1020_nt.sql, adjv1020_nt.sql, and admsc1020_nt.sql (Windows).
    As you run each of the scripts in the next four steps, note the following:
    1. The remarks section at the beginning of each script contains additional information.
    2. Each script creates a log file in the current directory.
  8. Set up the SYS schema
    The addb1020.sql or addb1020_nt.sql script sets up the SYS schema for use with the Applications. On the target database server node, use SQL*Plus to connect to the target database instance as SYSDBA and run addb1020.sql (UNIX/Linux) or addb1020_nt.sql (Windows).
    Here is an example on UNIX or Linux:

    $ sqlplus "/ as sysdba" @addb1020.sql
    
  9. Set up the SYSTEM schema
    The adsy1020.sql or adsy1020_nt.sql script sets up the SYSTEM schema for use with the Applications. On the target database server node, use SQL*Plus to connect to the target database instance as SYSTEM and run adsy1020.sql (UNIX/Linux) or adsy1020_nt.sql (Windows).
    Here is an example on UNIX or Linux:

    $ sqlplus system/[system password] @adsy1020.sql
    
  10. Install Java Virtual Machine
    The adjv1020.sql or adjv1020_nt.sql script installs the Java Virtual Machine (JVM) in the database. On the target database server node, use SQL*Plus to connect to the target database instance as SYSTEM and run adjv1020.sql (UNIX/Linux) or adjv1020_nt.sql (Windows).
    Here is an example on UNIX or Linux:

    $ sqlplus system/[system password] @adjv1020.sql
    

    Attention: This script can be run only once in a given database instance, because the scripts that it calls are not rerunnable.
  11. Install other required components
    The admsc1020.sql or admsc1020_nt.sql script installs the following required components in the database: ORD, Spatial, XDB, OLAP, Data Mining, interMedia, and ConText. On the target database server node, use SQL*Plus to connect to the target database instance as SYSTEM and run admsc1020.sql (UNIX/Linux) or admsc1020_nt.sql (Windows). You must pass the following arguments to the script, in the order specified:

    Argument               Value
    --------------------   ------
    remove context?        FALSE
    SYSAUX tablespace      SYSAUX
    temporary tablespace   TEMP
    Here is an example on UNIX or Linux:

    $ sqlplus system/[system password] \
        @admsc1020.sql FALSE SYSAUX TEMP 
    

    Attention: All of the components are created in the SYSAUX tablespace, regardless of where they were installed in the source database.
  12. Install custom RDBMS components (conditional)
    If you have other custom RDBMS components loaded in the source database such as Label Security, install them in the target database. To determine the RDBMS components that are loaded in the source and target databases, use SQL*Plus to connect to the databases as SYSDBA and run the following command:
    SQL> select * from dba_registry;
    
  13. Disable automatic gathering of statistics
    Copy $APPL_TOP/admin/adstats.sql from the administration server node to the target database server node. Use SQL*Plus to connect to the database as SYSDBA and use the following commands to restart the database in restricted mode and run adstats.sql:

    $ sqlplus "/ as sysdba"
    SQL> shutdown normal;
    SQL> startup restrict;
    SQL> @adstats.sql
    SQL> exit;
    
  14. Back up the target database instance
    The target database instance is now prepared for an import of the Applications data. You should perform a backup before starting the import.

Section 3: Export the source Release 12 database instance

This section describes how to ensure that you have the required patches, create your export file, and capture important information that is required to import your database.
  1. Create the export parameter file
    A template for the export parameter file has been included as part of the export/import patch. Copy $AU_TOP/patch/115/import/auexpdp.dat from the source administration server node to the working directory in the source database server node. Use a text editor to modify the file to reflect the source environment and other customized parameters.
    The customizable parameters are:

    Parameter   Description                                             Template Value
    ---------   -----------------------------------------------------   --------------
    directory   directory where the export dump files will be created   dmpdir
    dumpfile    export dump file name(s)                                aexp%U.dmp
    filesize    export dump file size                                   1GB
    log         log file name                                           expdpapps.log
    The interMedia, OLAP, and Data Mining schemas are not exported; the admsc1020.sql script creates these schemas in the target database. Ensure that the schema names in the exclude parameters reflect those in your database.
    Create a directory in the SYS schema that corresponds to the directory specified in the template. Here is an example of how to create a directory named dmpdir:
    $ sqlplus "/ as sysdba"
    SQL> create directory dmpdir as '/u01/expimp';
    
    Do not change the other parameters.
    The export process uses as many of the listed file names as necessary to hold the exported data. You must ensure that the number of dump files specified, as well as the size of each dump file, is sufficient to contain all the data in your source database instance.
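    To gauge whether the dumpfile and filesize settings can hold the export, you can estimate the file count from the source data volume. A back-of-the-envelope sketch, under the conservative assumption that the dump is roughly the size of the source data (the helper and figures are illustrative):

```python
import math

def dump_files_needed(source_gb, filesize_gb=1):
    """Estimate how many aexp%U.dmp files the export will create,
    given the source data volume and the filesize parameter (1GB
    in the template)."""
    return math.ceil(source_gb / filesize_gb)
```

    For example, a 42 GB source database with the template's 1GB filesize needs at least 42 dump files, which the aexp%U.dmp substitution variable accommodates automatically.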
  2. Shut down Applications server processes
    Shut down all Applications server processes except the database and the Net8 listener for the database. Users cannot use the Applications until the import is completed.
  3. Grant privilege to source system schema
    Grant the exempt access policy privilege to system by using SQL*Plus to connect to the database as SYSDBA and run the following command:

    SQL> grant EXEMPT ACCESS POLICY to system;
    
  4. Export OLAP analytical workspaces (optional)
    The export/import of OLAP analytical workspaces can consume significant resources and may cause memory issues such as bug 10331951. Customers who use OLAP may instead export/import OLAP directly through the DBMS_AW package as an alternative.
    Perform the detailed steps 1-3 as documented in My Oracle Support Note 352306.1, Upgrading OLAP from 32 to 64 bits, to export OLAP analytical workspaces on the source machine. Copy the export files to the target machine.
  5. Drop XLA packages (optional)
    The export/import of large Subledger Accounting (XLA) packages can take a long time. The XLA packages can be dropped before the export and re-created after the import to speed up the export/import process.
    On the source database server node, use SQL*Plus to connect to the source database as APPS and run the following to determine the XLA packages:
    $ sqlplus apps/[APPS password]
    SQL> select distinct('drop package '||db.owner||'.'|| db.object_name || ';')
    from dba_objects db, xla_subledgers xl
    where db.object_type='PACKAGE BODY' and db.object_name like 'XLA%AAD%PKG'
    and substr(db.object_name,1,9) = 'XLA_'||
    LPAD(SUBSTR(TO_CHAR(ABS(xl.application_id)), 1, 5), 5, '0')
    and db.object_name NOT IN ('XLA_AAD_HDR_ACCT_ATTRS_F_PKG','XLA_AMB_AAD_PKG')
    order by 1;
    
    Copy the output to SQL*Plus to drop the packages.
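    The query above matches generated XLA packages by name prefix; the LPAD expression pads the application ID to five digits. The naming rule it encodes can be sketched as (hypothetical helper, for illustration only):

```python
def xla_pkg_prefix(application_id):
    """Reproduce the SQL expression
    'XLA_' || LPAD(SUBSTR(TO_CHAR(ABS(application_id)), 1, 5), 5, '0')
    used to match generated XLA package names by prefix."""
    return "XLA_" + str(abs(application_id))[:5].rjust(5, "0")
```

    So application_id 200 yields packages whose names start with XLA_00200.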
  6. Export the Applications database instance
    Start an export session on the source database server node using the customized export parameter file. Use the following command:

    $ expdp "'/ as sysdba'" parfile=[export parameter file name]
    
    Typically, the export runs for several hours.

    Attention: See document 339938.1 on My Oracle Support if you encounter the failure:
    EXP-00056: ORACLE error 932 encountered
    ORA-00932: inconsistent datatypes: expected BLOB, CLOB got CHAR
    EXP-00000: Export terminated unsuccessfully
  7. Export tables with long columns (conditional)
    If you created aulongexp.dat by running aulong.sql in Section 1, start an export session on the source database server node using the following command:

    $ exp parfile=aulongexp.dat
    
  8. Export tables with XML type columns
    Copy $AU_TOP/patch/115/import/auxmlexp.dat from the source administration server to the working directory in the source database server node. Start an export session on the source database server node using the following command:

    $ exp parfile=auxmlexp.dat
    
  9. Import OLAP analytical workspaces (optional)
    If the source database is still to be used and you exported OLAP analytical workspaces, perform the detailed step 7 as documented in My Oracle Support Note 352306.1 to import the OLAP analytical workspaces that were previously exported from the source machine.
  10. Re-create XLA packages (optional)
    If the source database is still to be used and you dropped the XLA packages earlier, copy $XLA_TOP/patch/115/sql/xla6128278.sql from the administration server node to the source working directory, use SQL*Plus to connect to the database as APPS, and run the following script to re-create the XLA packages:
    $ sqlplus apps/[APPS password]
    SQL> @xla6128278.sql [spool log file]
    
  11. Revoke privilege from source system schema
    Revoke the exempt access policy privilege from system by using SQL*Plus to connect to the database as SYSDBA and run the following command:

    SQL> revoke EXEMPT ACCESS POLICY from system;
    

Section 4: Import the Release 12 database instance

This section describes how to use the import utility to load the Oracle Applications data into the target database.
  1. Create the import parameter files
    Copy auimpdp.dat, aufullimp.dat, and auimpusr.dat from the $AU_TOP/patch/115/import directory in the source administration server node to the working directory in the target database server node. Make sure that the directory, dumpfile, and logfile parameters in auimpdp.dat and auimpusr.dat are set properly.
    Create a directory in the system schema with the name set to the directory specified in the template and the path set to where the export dump files will reside. Here is an example of how to create a directory named dmpdir:
    $ sqlplus system/[system password] 
    SQL> create directory dmpdir as '/u01/expimp';
    
    Save the changed file.
  2. Copy the export dump files
    Copy the export dump files from the source database server node to the working directory in the target database server node.
  3. Import the users into the target database (conditional)
    If you exported the long columns in Section 3, start an import session on the target database server node using the customized import parameter file. Use the following command:

    $ impdp system/[system password] parfile=auimpusr.dat
    
  4. Import tables with long columns into the target database (conditional)
    If you exported the long columns in Section 3, modify the aufullimp.dat file with the following:
    1. Set userid to "sys/[sys password] as sysdba".
    2. Set file to the dump file containing the long tables (longexp by default).
    3. Set the log file appropriately.
    4. Leave the ignore parameter commented out.
    Import the tables using the following command:
    $ imp parfile=aufullimp.dat
    

    Attention: You will get failures for the triggers because the dependent tables have not yet been imported.
  5. Import the Applications database instance
    If you did not export the long columns in Section 3, remove or comment out all the exclude parameters in the auimpdp.dat parameter file. Start an import session on the target database server node using the auimpdp.dat parameter file. Use the following command:

    $ impdp "'/ as sysdba'" parfile=auimpdp.dat
    
    Typically, the import runs for several hours.
  6. Import triggers into the target database (conditional)
    If you exported the long columns in Section 3, modify the aufullimp.dat file with the following:
    1. Set userid to "sys/[sys password] as sysdba".
    2. Set file to the dump file containing the long tables (longexp by default).
    3. Change the log file name.
    4. Uncomment the ignore parameter.
    5. Add a line with the parameter "rows=n".
    Start an import session on the target database server node using the customized import parameter file. Use the following command:

    $ imp parfile=aufullimp.dat
    
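    Steps 4 and 6 edit aufullimp.dat in the same way except for the ignore and rows settings. Those mechanical edits could be scripted; the following is a hypothetical sketch (the parameter names come from the steps above, but the helper itself is not an Oracle tool):

```python
def edit_parfile(text, set_params, comment_out=(), uncomment=()):
    """Rewrite classic imp/exp parameter-file text.

    set_params:  {name: value} lines to replace (or append if absent);
    comment_out: parameter names to prefix with '#';
    uncomment:   parameter names to strip a leading '#' from.
    """
    lines, seen = [], set()
    for raw in text.splitlines():
        stripped = raw.lstrip("#").strip()
        name = stripped.split("=", 1)[0].strip().lower() if "=" in stripped else None
        if name in set_params:
            lines.append(f"{name}={set_params[name]}")
            seen.add(name)
        elif name in comment_out:
            lines.append("#" + stripped)
        elif name in uncomment:
            lines.append(stripped)
        else:
            lines.append(raw)
    for name, value in set_params.items():
        if name not in seen:
            lines.append(f"{name}={value}")
    return "\n".join(lines)
```

    For step 6, for example, one would change the log file name, uncomment ignore, and add rows=n in a single call.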
  7. Import OLAP analytical workspaces (conditional)
    If you exported OLAP analytical workspaces, perform the detailed step 7 as documented in My Oracle Support Note 352306.1 to import the OLAP analytical workspaces that were previously exported from the source machine.
  8. Revoke privilege from target system schema
    Revoke the exempt access policy privilege from system by using SQL*Plus to connect to the database as SYSDBA and run the following command:

    SQL> revoke EXEMPT ACCESS POLICY from system;
    

Section 5: Update the imported Release 12 database instance

This section describes how to recreate the database objects and relationships that are not handled by the export and import utilities.
  1. Reset Advanced Queues
    Copy the auque2.sql script that was generated in Section 1 from the working directory in the source database server node to the working directory in the target database server node. Then, on the target database server node, as the owner of the Oracle 10g file system and database instance, use SQL*Plus to connect to the target database as SYSDBA and run the auque2.sql script to enable the Advanced Queue settings that were lost during the export/import process. The script creates a log file in the current directory.
    $ sqlplus /nolog
    SQL> connect / as sysdba;
    SQL> @auque2.sql
    
  2. Start the new database listener (conditional)
    If the Oracle Net listener for the database instance in the new Oracle home has not been started, you must start it now. Since AutoConfig has not yet been implemented, start the listener with the lsnrctl executable (UNIX/Linux) or Services (Windows). See the Oracle Database Net Services Administrator's Guide, 10g Release 2 (10.2) for more information.

    Attention: Set the TNS_ADMIN environment variable to the directory where you created your listener.ora and tnsnames.ora files.
  3. Run adgrants.sql
    Copy $APPL_TOP/admin/adgrants.sql (adgrants_nt.sql for Windows) from the administration server node to the working directory in the database server node. Use SQL*Plus to connect to the database as SYSDBA and run the script using the following command:
    $ sqlplus "/ as sysdba" @adgrants.sql (or adgrants_nt.sql) \
        [APPS schema name]
    

    Note: Verify the usage notes in the adgrants.sql script itself. Older versions of adgrants.sql require the APPLSYS schema name to be passed as the parameter instead of APPS.
  4. Grant create procedure privilege on CTXSYS
    Copy $AD_TOP/patch/115/sql/adctxprv.sql from the administration server node to the database server node. Use SQL*Plus to connect to the database as APPS and run the script using the following command:
    $ sqlplus apps/[APPS password] @adctxprv.sql \
        [SYSTEM password] CTXSYS
    
  5. Apply patch 6494466 (conditional)
    If the target database is Windows and the source is not, apply patch 6494466 on the target database tier. Create the appsutil directory if needed.
  6. Deregister the current database server (conditional)
    If you plan to change the database port, host, SID, or database name parameter on the database server, you must also update AutoConfig on the database tier and deregister the current database server node.
    Use SQL*Plus to connect to the database as APPS and run the following command:
    $ sqlplus apps/[APPS password]
    SQL> exec fnd_conc_clone.setup_clean;
    
  7. Implement and run AutoConfig
    Implement and run AutoConfig in the new Oracle home on the database server node. If the database listener of the new Oracle home is defined differently than the old Oracle home, you must also run AutoConfig on each application tier server node to update the system with the new listener.
    See Using AutoConfig to Manage System Configurations in Oracle Applications Release 12 on My Oracle Support, especially section 3.2, for instructions on how to implement and run AutoConfig.
    Shut down all processes, including the database and the listener, and restart them to load the new environment settings.
  8. Gather statistics for SYS schema
    Use SQL*Plus to connect to the database as SYSDBA and use the following commands to restart the database in restricted mode, run adstats.sql, and restart the database in normal mode:

    $ sqlplus "/ as sysdba"
    SQL> shutdown normal;
    SQL> startup restrict;
    SQL> @adstats.sql
    SQL> shutdown normal;
    SQL> startup;
    SQL> exit;
    

    Attention: Make sure that you have at least 1.5 GB of free default temporary tablespace.
  9. Re-create custom database links (conditional)
    If the Oracle Net listener in the 10.2.0 Oracle home is defined differently than the one used by the old Oracle home, you must re-create any custom self-referential database links that exist in the Applications database instance. To check for the existence of database links, use SQL*Plus on the database server node to connect to the Applications database instance as APPS and run the following query:

    $ sqlplus apps/[apps password]
    SQL> select db_link from dba_db_links;
    
    The EDW_APPS_TO_WH and APPS_TO_APPS database links, if they exist, should have been updated with the new port number by AutoConfig in the previous step.
    If you have custom self-referential database links in the database instance, use the following commands to drop and re-create them:

    $ sqlplus apps/[apps password]
    SQL> drop database link [custom database link];
    SQL> create database link [custom database link] connect to
         [user] identified by [password] using
         '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=[hostname])
         (PORT=[port number]))(CONNECT_DATA=(SID=[ORACLE_SID])))';
    
    where [custom database link], [user], [password], [hostname], [port number], and [ORACLE_SID] reflect the new Oracle Net listener for the database instance.
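    The connect string in the create database link command follows the standard Oracle Net connect descriptor format. A small sketch that assembles it (hypothetical helper, shown only to make the placeholder substitution concrete):

```python
def tns_descriptor(hostname, port, sid):
    """Build the Oracle Net connect descriptor used in the
    CREATE DATABASE LINK example above, substituting the
    [hostname], [port number], and [ORACLE_SID] placeholders."""
    return ("(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)"
            f"(HOST={hostname})(PORT={port}))"
            f"(CONNECT_DATA=(SID={sid})))")
```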
  10. Create ConText and AZ objects
    Certain ConText objects and the AZ objects dependent on the tables with XML type columns are not preserved by the import process. The consolidated export/import utility patch that you applied to the source administration server node in Section 1 contains a perl script, dpost_imp.pl, that you can run to generate an AutoPatch driver file. You use this driver file to call the scripts that create these objects. Run the following command:

    $ perl $AU_TOP/patch/115/bin/dpost_imp.pl [driver file] 10
    
    Once the driver file has been generated, use AutoPatch to apply it on the target administration server node.
  11. Import tables with XML type columns into the target database
    Modify the aufullimp.dat file with the following:
    1. Set userid to "az/[az password]"
    2. Set file to the dump file containing the tables with XML types (xmlexp by default).
    3. Change the log file name.
    4. Comment out the ignore and rows parameters.
    Start an import session on the target database server node using the customized import parameter file. Use the following command:

    $ imp parfile=aufullimp.dat
    
    Once the import is complete, you can delete the export dump files, as well as the export and import parameter files, from the source and target database server nodes.
  12. Populate CTXSYS.DR$SQE table
    To populate the CTXSYS.DR$SQE table, use SQL*Plus on the database server node to connect to the Applications database instance as APPS and run the following command:

    $ sqlplus apps/[apps password]
    SQL> exec icx_cat_sqe_pvt.sync_sqes_for_all_zones;
    
  13. Compile invalid objects
    On the target database server node, as the owner of the Oracle 10g file system and database instance, use SQL*Plus to connect to the target database as SYS and run the $ORACLE_HOME/rdbms/admin/utlrp.sql script to compile invalid objects.

    $ sqlplus "/ as sysdba" @$ORACLE_HOME/rdbms/admin/utlrp.sql
    
  14. Re-create XLA packages (conditional)
    If you dropped the XLA packages in the source environment, copy $XLA_TOP/patch/115/sql/xla6128278.sql from the administration server node to the target working directory, use SQL*Plus to connect to the database as APPS, and run the following script to re-create the XLA packages:
    $ sqlplus apps/[APPS password]
    SQL> @xla6128278.sql [spool log file]
    
  15. Maintain Applications database objects
    Run AD Administration on the target administration server node. From the Maintain Applications Database Objects menu, perform the following tasks:

    1. Compile flexfield data in AOL tables
    2. Recreate grants and synonyms for APPS schema
  16. Start Applications server processes
    Start all the server processes on the target Applications system. You can allow users to access the system at this time.
  17. Create DQM indexes
    Create DQM indexes by following these steps:
    1. Log on to Oracle Applications with the "Trading Community Manager" responsibility
    2. Click Control > Request > Run
    3. Select "Single Request" option
    4. Enter "DQM Staging Program" name
    5. Enter the following parameters:
      • Number of Parallel Staging Workers: 4
      • Staging Command: CREATE_INDEXES
      • Continue Previous Execution: NO
      • Index Creation: SERIAL
    6. Click "Submit"

Change Record

The following sections were changed in this document.
Date          Summary of Changes
10-Dec-2007   Initial release
11-Jan-2008   Changed export/import patch 6258200 to 6723741
3-Jul-2008    • Modified AutoConfig related instructions
              • Added step to populate CTXSYS.DR$SQE table
              • Added patch 6494466
              • Updated export/import patch to 6924477
              • Added instructions related to the exempt access policy grant
              • Added aucrdb.sql attention box
25-May-2009   Added attention statement to see 339938.1 when encountering ORA-932
11-Sep-2009   • Added Database Vault information
              • Added revoke exempt access policy for target system
16-Nov-2009   • Modified OracleMetaLink to My Oracle Support
              • Modified AutoConfig step numbers as step numbers in AutoConfig have changed
              • Modified adgrants.sql to run with APPS parameter
              • Modified export/import patch to 7120092
              • Incorporated 12.1 into the document
              • Modified step to create 9idata directory to ensure directory exists
              • Replaced Interoperability note links from 454750.1 to 812362.1
2-Jul-2010    • Made all steps related to export/import of long columns conditional on RDBMS version
              • Changed expdp and impdp to run as SYS schema
6-Feb-2012    • Added optional steps to export/import OLAP separately
              • Changed export/import patch from 7120092 to 13023290
              • Changed deregistering of database node to run fnd_conc_clone instead
              • Included optional step to re-create XLA packages
              • Added step to synchronize CTX indexes
              • Added step to install custom RDBMS components
Note 454616.1 by Oracle Applications Development
Copyright 2007 Oracle USA
Last modified: Monday, February 6, 2012


    Products
    • Oracle E-Business Suite > Applications Technology > Technology Components > Oracle Applications Technology Stack
    Errors
    EXP-00000; EXP-00056; ORA-00932
