23 March 2011

Steps to move a database from one server to another in SOLARIS, AIX, LINUX

Steps to move a database from one server to another in SOLARIS, AIX, LINUX:

Pre-requisites:

* The OS should be the same version on both servers.

Step 1: Check whether the Oracle software version of the source database is installed on the target server.

 If not, install the same Oracle software version on the target server.

Ex: If the source server runs 10.2.0.3, install 10.2.0.3 on the target server as well.

Step 2: Copy the following files from the source server to the target server:

        1. control files
        2. redo log files
        3. datafiles

Step 3:
Copy init<sid>.ora/spfile<sid>.ora and orapw<sid> from the source server to the $ORACLE_HOME/dbs folder on the target server.

Step 4: Update the control file locations in init<sid>.ora.

If only spfile<sid>.ora exists, create a pfile from the spfile, edit the control file locations in the pfile, and then recreate the spfile from the pfile.
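For example, a minimal sketch of the pfile/spfile round trip (the new control file paths are placeholders for the target server's locations):

SQL> create pfile from spfile;

Edit $ORACLE_HOME/dbs/init<sid>.ora on the target and point control_files to the new locations, e.g.:

*.control_files='/<new_dir>/control01.ctl','/<new_dir>/control02.ctl'

SQL> create spfile from pfile;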

Step 5: In the $ORACLE_BASE/admin folder, create a folder with the name <SID>.

        Under this folder create adump, bdump, cdump and udump.

Ex: /oracle/home$cd admin

    /oracle/home/admin$mkdir <SID>

Go to the folder created with the SID and create the dump folders:

/oracle/home/admin/<sid>$ mkdir adump bdump cdump udump

Step 6: Startup mount

$ sqlplus "/ as sysdba"
SQL>startup nomount
SQL> alter database mount;

Step 7: Modify the following scripts to generate the rename statements for the datafiles and redo log files (adjust the substr offset to match the length of your source directory prefix):

select 'alter database rename file ''' || name || ''' to ''' || '/<datafile_dir>' || substr(name,6,length(name)) || ''';' from v$datafile;

select 'alter database rename file ''' || member || ''' to ''' || '/<datafile_dir>' || substr(member,6,length(member)) || ''';' from v$logfile;

Step 8: Rename the datafiles and redo log files.

Run the statements generated in Step 7; a sample generated statement is shown below.
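For example, assuming the source datafiles lived under /data (so the substr strips the first five characters) and <datafile_dir> is the new top-level directory on the target, a generated statement would look like:

SQL> alter database rename file '/data/oradata/<sid>/system01.dbf' to '/<datafile_dir>/oradata/<sid>/system01.dbf';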

Step 9: Rename the temp file.

To generate the rename statements for the temporary tablespace files:

select 'alter database rename file ''' || name || ''' to ''' || '/<datafile_dir>' || substr(name,6,length(name)) || ''';' from v$tempfile;

If the database is Oracle 9i or earlier, create a new temporary tablespace instead.

If the existing temporary tablespace is TEMP and its datafile is temp01.dbf, create a tablespace with another name:

SQL>CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '/data/temp02.dbf' SIZE 500M;

SQL>alter database default temporary tablespace <new temporary tablespace name>;

Then drop the old temporary tablespace.
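A minimal sketch, assuming the old temporary tablespace is named TEMP and is no longer the database default:

SQL> DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;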

Step 10: Open the database.

SQL> alter database open;
  
Step 11:

Make the necessary changes in listener.ora and tnsnames.ora on the target server and start the listener.
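For example, a minimal static listener.ora entry for the moved database (the Oracle home path and SID are placeholders for your environment):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /<oracle_home>)
      (SID_NAME = <sid>)
    )
  )

$ lsnrctl start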

18 March 2011

STEPS TO CONFIGURE DATAGUARD in 10G

Assumptions:

The primary instance is running and Oracle software of the same version is installed on the standby server.
Primary database name : colab
Standby database name : colabsby

1. Physical standby database creation :

a. Preparing the Primary Database for Standby Database Creation:

Before creating a standby database, you must first ensure the primary database is properly configured:

1. Enable Force logging
2. Create a password File
3. Create standby Redo log file groups
4. Set Primary Database Initialization Parameters
5. Enable archiving

1.Enable Force logging :

Place the primary database in FORCE LOGGING mode after database creation using the following SQL
statement:

SQL> select force_logging from v$database;

FORCE_LOG
---------
NO

SQL> ALTER DATABASE FORCE LOGGING;

2.Create a password File :

Create a password file if one does not already exist.
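For example, a minimal sketch (the file name follows the orapw<sid> convention; the password and entries values are placeholders):

$ orapwd file=$ORACLE_HOME/dbs/orapwcolab password=<sys_password> entries=5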

3.Create standby Redo log file groups :

SQL> alter database add standby logfile size 50M;
Database altered.
SQL> alter database add standby logfile size 50M;
Database altered.
SQL> alter database add standby logfile size 50M;
Database altered.
SQL> alter database add standby logfile size 50M;
Database altered.
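To confirm the standby redo log groups were added, you can query v$standby_log, for example:

SQL> select group#, thread#, bytes/1024/1024 size_mb, status from v$standby_log;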

4.Set Primary Database Initialization Parameters :

SQL> alter system set log_archive_config='dg_config=(colab,colabsby)';
System altered.

Edit pfile and add the following parameters :

DB_UNIQUE_NAME=colab
LOG_ARCHIVE_CONFIG='DG_CONFIG=(colab,colabsby)'
LOG_ARCHIVE_DEST_1=
'LOCATION=/data3/oradata/colab/arch
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=colab'
LOG_ARCHIVE_DEST_2=
'SERVICE=colabsby LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=colabsby'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
*.fal_client='colab'
*.fal_server='colabsby'
*.DB_FILE_NAME_CONVERT='/data3/oradata/colabsby','/data3/oradata/colab'
*.LOG_FILE_NAME_CONVERT='/oracle/flash_recovery_area/COLABSBY/onlinelog','/oracle/flash_recovery_area/COLAB/onlinelog'
*.standby_file_management='AUTO'

SQL> startup pfile='<pfile location>'
SQL> create spfile from pfile;
File created.
SQL> shutdown immediate
SQL> startup
SQL> alter system set standby_file_management = AUTO;
System altered.

5.Enable archiving :

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /data1/oracle/admin/colab/arch
Oldest online log sequence 16
Current log sequence 18
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
Database altered.
SQL> archive log list

Edit the tnsnames.ora & listener.ora on the primary server to add the standby database details.

Edit the tnsnames.ora & listener.ora on the standby server to add the database details, or copy the files from the primary server.
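For example, a minimal tnsnames.ora sketch covering both databases (host names and port are placeholders for your environment):

colab =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <primary_host>)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = colab))
  )

colabsby =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <standby_host>)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = colabsby))
  )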

b. Step-by-Step Instructions for Creating a Physical Standby Database :

1.Create a backup copy of the primary database Datafiles:

Any kind of primary database backup (cold or hot) can be used.

2.Create a Control File for the standby database:

In primary :

SQL>SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT;

Create the standby control file, then open the primary database to user access:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/colabsby.ctl';
SQL> ALTER DATABASE OPEN;

3.Prepare an Initialization parameter File for the Standby Databases:

Create a pfile from the spfile used by the primary database:
SQL> create pfile='/tmp/initcolabsby.ora' from spfile;

Copy this pfile to the standby server and modify the following parameters accordingly:

Ex:
db_name='colab'
DB_FILE_NAME_CONVERT='/data1/oracle/oradata/colab','/oracle/oradata/colabsby'
DB_UNIQUE_NAME='colabsby'
fal_client='colabsby'
fal_server='colab'
LOG_ARCHIVE_DEST_1='LOCATION=/oracle/admin/colabsby/arch
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=colabsby'
LOG_ARCHIVE_DEST_2='SERVICE=colab LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=colab'
LOG_ARCHIVE_DEST_STATE_1='ENABLE'
LOG_ARCHIVE_DEST_STATE_2='ENABLE'
LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
LOG_ARCHIVE_MAX_PROCESSES=30
LOG_FILE_NAME_CONVERT='/data1/oracle/flash_recovery_area/COLAB/onlinelog','/oracle/flash_recovery_area/COLABSBY/onlinelog'

4.Copy files from the Primary system to the standby system:

Use an operating system copy utility to copy the following binary files from the primary system to the
standby system:

. Backup Datafiles
. Standby controlfile
. Initialization parameter file

5.Set Up the Environment to Support the Standby Database:

Create a password file if it is required, and configure the listeners for both the primary and standby databases.

$ lsnrctl stop
$ lsnrctl start

Try to ping both databases from both servers using the TNSPING utility:

ex: tnsping colab
tnsping colabsby

6.Start the Physical Standby Database:

On the standby database :

$sqlplus "/as sysdba"
SQL> startup nomount pfile=/oracle/product/10.2.0.1/dbs/initcolabsby.ora
SQL> create spfile from pfile;
File created.
SQL> shutdown immediate
SQL> startup mount

* To start Redo Apply :

SQL> alter database recover managed standby database disconnect from session;
Database altered.

7.Verify the Physical Standby Database Is Performing Properly:

On the primary & standby, run the following to identify the existing archived redo log files and whether they have been applied:

SQL> SELECT SEQUENCE#, FIRST_TIME, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIME APPLIED
-------------- ------------------ -------------
25 06-MAY-10 NO
26 06-MAY-10 YES
27 06-MAY-10 YES
28 06-MAY-10 YES
29 06-MAY-10 YES

On the primary database issue the following statement to force a log switch and archive the current online
redo log file group :

SQL> ALTER SYSTEM SWITCH LOGFILE;
or
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;

Now on the primary & standby database, query the V$ARCHIVED_LOG view to verify the redo data
was received and archived on the standby database:

Primary :
SQL> SELECT SEQUENCE#, FIRST_TIME, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIME APPLIED
---------------- --------------- --------------
25 06-MAY-10 NO
26 06-MAY-10 YES
27 06-MAY-10 YES
28 06-MAY-10 YES
29 06-MAY-10 YES
30 06-MAY-10 NO

Standby :

SQL> SELECT SEQUENCE#, FIRST_TIME, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIME APPLIED
---------------- --------------- --------------
25 06-MAY-10 NO
26 06-MAY-10 YES
27 06-MAY-10 YES
28 06-MAY-10 YES
29 06-MAY-10 YES
30 06-MAY-10 NO

Now the standby database is configured properly and it is receiving archived logs from the primary database.
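Optionally, on the standby you can also confirm that the managed recovery process (MRP) is running, for example:

SQL> select process, status, sequence# from v$managed_standby;

An MRP0 row with a status such as APPLYING_LOG or WAIT_FOR_LOG indicates that Redo Apply is active.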

The next step is to configure the Data Guard Broker so that switchover (and failover) operations can be performed between the primary and standby.

1. IN PRIMARY :

SQL> alter system set dg_broker_config_file1='/data1/oracle/product/10.2.0.1/dbs/dr1colab.dat' scope=spfile;
SQL> alter system set dg_broker_config_file2='/data1/oracle/product/10.2.0.1/dbs/dr2colab.dat' scope=spfile;
SQL> alter system set dg_broker_start=true sid='*';
SQL> sho parameter dg_broker_start
SQL> sho parameter dg_broker_config

Check whether the DMON process is started or not :

-bash-3.00$ ps -ef|grep dmon|grep -v grep
oracle 1830 1 0 14:02:43 ? 0:00 ora_dmon_colab

2. In STANDBY :

SQL> alter system set dg_broker_config_file1='/oracle/product/10.2.0.1/dbs/dr1colabsby.dat' scope=spfile;
SQL> alter system set dg_broker_config_file2='/oracle/product/10.2.0.1/dbs/dr2colabsby.dat' scope=spfile;
SQL> alter system set dg_broker_start=true sid='*';
SQL> sho parameter dg_broker_start
SQL> sho parameter dg_broker_config

Check whether the DMON process is started or not :

bash-2.05$ ps -ef|grep dmon|grep -v grep
oracle 11427 1 0 01:23:35 ? 0:00 ora_dmon_colabsby

IN PRIMARY :

-bash-3.00$ dgmgrl

DGMGRL for Solaris: Version 10.2.0.1.0 - 64bit Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys
Password:
Connected.

DGMGRL> create configuration 'DGcolab1' as
> primary database is 'colab'
> connect identifier is colab;
Configuration "DGCOLAB1" created with primary database "colab"

DGMGRL> show configuration
Configuration
Name: DGCOLAB1
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
colab - Primary database
Current status for "DGCOLAB1":
DISABLED

DGMGRL> add database 'colabsby' as
> connect identifier is colabsby
> maintained as physical;
Database "colabsby" added

DGMGRL> show configuration
Configuration
Name: DGcolab1
Enabled: NO
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
colab - Primary database
colabsby - Physical standby database
Current status for "DGcolab1":
DISABLED

DGMGRL>ENABLE CONFIGURATION;
ENABLED

DGMGRL> show configuration
Configuration
Name: DGcolab1
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
colab - Primary database
colabsby - Physical standby database
Current status for "DGcolab1":
SUCCESS

The new Data Guard broker configuration will now be tested by performing a switchover operation :

DGMGRL> switchover to colabsby;

Performing switchover NOW, please wait .......
New primary database 'colabsby' is opening .......
Operation requires shutdown of instance “colab” on database “colab”
Shutting down instance “colab” ......
...........................................................................
...........................................................................
...........................................................................
...........................................................................
Switchover succeeded, new primary is 'colabsby'

DGMGRL> show configuration
Configuration
Name: DGcolab1
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
colabsby - Primary database
colab - Physical standby database
Current status for "DGcolab1":
SUCCESS

Following the switchover, the databases have swapped roles.

Log in to the former standby database (now the primary) and confirm its role:

SQL>select database_role from v$database;

DATABASE_ROLE
---------------------------
PRIMARY


/*END*/

09 March 2011

How to Apply patch 10.2.0.5.4 in OEM Grid control

To apply patch 10.2.0.5.4 in OEM Grid control :

Download the patch p9786002_10205_Generic.zip from Metalink and copy to desired directory.

Follow these steps:

1.
Set ORACLE_HOME to your <OMS_HOME>.

On Windows:
>set ORACLE_HOME=C:\OracleHomes\oms10g

On UNIX/Linux:
$ export ORACLE_HOME=/home/OracleHomes/oms10g

Add <ORACLE_HOME>/bin and <ORACLE_HOME>/OPatch to your PATH environment variable.

On Windows, add this in the Environment Variables dialog.

On AIX, Solaris, etc., add them to your PATH variable as below:

$ export PATH=$PATH:/home/OracleHomes/oms10g/bin:/home/OracleHomes/oms10g/OPatch
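To confirm that OPatch is now picked up from the OMS home, you can run, for example:

$ opatch version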

2.
Shut down the OMS using the following command.
In a multi-OMS environment, shut it down on all OMS machines.

$emctl stop oms

Leave EM Repository database and its listener running.

3.
Set your current directory to the directory where the patch is located and then run the OPatch utility by entering the following commands:

$unzip p9786002_10205_Generic.zip

$cd 9786002

$opatch napply -skip_subset

Ex:

C:\Dinaz\oracle_dump\p9786002_102050_Generic\9786002\9786002>opatch napply -skip_subset

Invoking OPatch 10.2.0.4.6
Oracle Interim Patch Installer version 10.2.0.4.6
Copyright (c) 2009, Oracle Corporation.  All rights reserved.
UTIL session
Oracle Home       : C:\OracleHomes\oms10g
Central Inventory : C:\Program Files\Oracle\Inventory
   from           : n/a
OPatch version    : 10.2.0.4.6
OUI version       : 10.2.0.5.0
OUI location      : C:\OracleHomes\oms10g\oui
Log file location : C:\OracleHomes\oms10g\cfgtoollogs\opatch\opatch2010-08-31_16-01-20PM.log
Patch history file: C:\OracleHomes\oms10g\cfgtoollogs\opatch\opatch_history.txt
Invoking utility "napply"
Checking conflict among patches...
Checking if Oracle Home has components required by patches...
Checking skip_subset
Checking conflicts against Oracle Home...
OPatch continues with these patches:   9786002
Do you want to proceed? [y|n]y
User Responded with: Y
Running prerequisite checks...
OPatch detected non-cluster Oracle Home from the inventory and will patch the local system only.
Backing up files affected by the patch 'NApply' for restore. This might take a while...
Applying patch 9786002...
ApplySession applying interim patch '9786002' to OH 'C:\OracleHomes\oms10g'
Backing up files affected by the patch '9786002' for rollback. This might take a while...
Patching component oracle.sysman.top.oms, 10.2.0.5.0...
Copying file to "C:\OracleHomes\oms10g\sysman\admin\rollback_9786002.lst"
ApplySession adding interim patch '9786002' to inventory
Verifying the update...
Inventory check OK: Patch ID 9786002 is registered in Oracle Home inventory with proper meta-data.
Files check OK: Files from Patch ID 9786002 are present in Oracle Home.
The local system has been patched and can be restarted.
UtilSession: N-Apply done.
OPatch succeeded.

4.
Start the OMS.

From OMS_HOME

$emctl start oms

C:\OracleHomes\oms10g\BIN>emctl start oms
If OEM is not accessible after starting the OMS, try restarting OPMN:

C:\OracleHomes\oms10g\opmn\bin>opmnctl stopall

C:\OracleHomes\oms10g\opmn\bin>opmnctl startall

OEM GRID CONTROL UPGRADE FROM 10.2.0.1 to 10.2.0.5 (Part 2)

OEM Grid Control Upgrade Continued :

Installation Procedure:

For upgrading Enterprise Manager Grid Control 10.2.0.x to 10.2.0.5, you must manually download the 10.2.0.5 Patch Set from Metalink and extract it.

Download and extract the Patch Set:

1. Download the p3731593_102050_<platform name>.zip patch set installation archive to any directory.

2. Enter the following command to unzip and extract the installation files:
$ unzip p3731593_102050_<platform name>.zip
This extracts the files to the "3731593" directory.
NOTE:
The same Patch Set can be used for upgrading Oracle Management Service, Grid Control Repository (sysman schema), and Management Agent.

Upgrading Oracle Management Service:
The pre-installation tasks can be broadly categorized based on the following types of upgrade that you can perform:

• Upgrading First Oracle Management Service and repository
• Upgrading Additional Oracle Management Services
• Upgrading Management Agent


Upgrading The Oracle Management Service And Repository:

To upgrade multiple Oracle Management Services (OMSs), shut down all of them, apply the 10.2.0.5 patch set to any one OMS so that the associated Management Repository (sysman schema) is also upgraded, and then apply the patch set to the remaining Oracle Management Services.

Note:
This Patch Set upgrades only the Grid Control repository (sysman schema), not the database that contains the Grid Control repository.

Be sure that you first upgrade the Oracle Management Service (OMS), and then the Management Agent before upgrading the database that contains the Grid Control repository.


The steps to be taken before upgrading the first OMS are:

• While applying the OMS patch set, leave the repository database and listener instance running.
• Make sure that you follow all the pre-installation tasks before going ahead with upgrade process.
• Shut down all OMS instances attached to the Grid Control repository, from the respective Oracle homes in their respective hosts.

Also stop the Application Server components that run in each Oracle home directory of the Oracle Management Service (OMS).

Note:
If you have multiple Oracle Management Services then you must stop all the OMS instances. When you upgrade the first OMS, the Patch Set also upgrades the Grid Control repository, and since the other OMS instances connect to the same Grid Control repository, they must also be stopped.

Stopping the Management Agents is not mandatory; if they are left running, there may be an increase in the number of Agent-related log files, but this is harmless and can be ignored.

To stop all the Grid Control components on a host, follow these steps:

1. Stop the Oracle Management Service (OMS) by running the following command from the Oracle home directory of the OMS, where $ORACLE_HOME is the Oracle home directory of the Oracle Management Service:
$ORACLE_HOME/bin/emctl stop oms
C:\OracleHomes\oms10g\BIN>emctl stop oms

2. Stop the Application Server Control Console, which is used to manage the Oracle Application Server instance used to deploy the Management Service:

$ORACLE_HOME/bin/emctl stop iasconsole
C:\OracleHomes\oms10g\BIN>emctl stop iasconsole


3. Stop all the application server components, such as Oracle HTTP Server and OracleAS Web Cache by running the following command from the Oracle home directory of the OMS.

$ORACLE_HOME/opmn/bin/opmnctl stopall
C:\OracleHomes\oms10g\opmn\bin>opmnctl stopall
4. Wait for some time to ensure that all the OPMN processes are stopped and the TCP ports are released.

5. Set ORACLE_HOME to the Oracle home directory of the Oracle Management Service before applying the patch set.

6. Apply the 10.2.0.5 Patch Set either interactively by executing runInstaller (on Microsoft Windows, setup.exe) or silently by using the response file.

After you have upgraded the first OMS and the Grid Control repository (sysman schema) to version 10.2.0.5, the first OMS starts up automatically and you must then patch the unpatched OMS's to version 10.2.0.5.

Warning:
Do not start an OMS that has not been patched to version 10.2.0.5.


For example: assume there are four version 10.2.0.2 Oracle Management Services labeled OMS A, B, C, and D. In the previous section, all four OMS's are down and you decide to patch OMS B first. After you patch OMS B to version 10.2.0.5, the Enterprise Manager Repository is also patched and OMS B is now up and running. However OMS A, C, and D remain down.

You can then patch OMS A, C, and D in parallel or in serial to version 10.2.0.5 but you must not bring up the unpatched version of those OMS's at any time prior to patching them to version 10.2.0.5.


Upgrading Management Agent:
The Management Agent can be upgraded in two ways - either one host at a time using Oracle Universal Installer, or many hosts at a time using Patch Wizard from Grid Control.
Upgrading Management Agent - One Host At A Time
To upgrade Management Agents, one host at a time, using OUI, follow these steps:

1. Login to the specific host where the Management Agent is running.
2. Change directory to the Oracle home directory for the Management Agent and stop the Management Agent. For cluster Agent, stop the Agent on all cluster nodes.
$ORACLE_HOME/bin/emctl stop agent
C:\OracleHomes\agent10g\BIN>emctl stop agent
3. Ensure that $ORACLE_HOME is set to the Oracle home directory of the Management Agent that is intended for patching.
set ORACLE_HOME=C:\OracleHomes\agent10g
4. Extract the software and then run the ./runInstaller executable from the 3731593/Disk1 subdirectory. (For Microsoft Windows, run setup.exe) and proceed with the upgrade.


Post Installation Tasks:

If you upgraded multiple Oracle Management Services (OMSs) by applying the patch set to one of them first and then to the remaining ones, the OMS restarts automatically once the upgrade is complete. If the Agents were shut down prior to the OMS upgrade, they must be restarted manually.

Start all the agents and verify that the Grid Control components are up:

1.C:\OracleHomes\agent10g\BIN>emctl status agent
If it is not up, start the agent with the command "emctl start agent".

2.C:\OracleHomes\oms10g\BIN>emctl status oms
If it is not up, start the OMS with the command "emctl start oms".

3.C:\OracleHomes\oms10g\BIN>emctl status iasconsole
Oracle Enterprise Manager 10g Release 5 Grid Control
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
Oracle 10g Application Server Control is not running.

C:\OracleHomes\oms10g\BIN>emctl start iasconsole

4.C:\OracleHomes\oms10g\opmn\bin>opmnctl startall
opmnctl: starting opmn and all managed processes...
sys password : dba
ias_admin password : dba123

01 March 2011

OEM GRID CONTROL UPGRADE FROM 10.2.0.1 to 10.2.0.5 (Part 1)

1. Pre-installation Tasks

    The following are the general pre-installation tasks.

1. Back up the Oracle home that will be upgraded with this patch set. In the case of a Management Repository, Oracle recommends backing up the repository database before applying the patch set, because the patch set makes changes to the repository that cannot be rolled back. Also back up the Oracle Inventory directory.

2. Make sure that you meet the product pre-requisites (operating system patches, packages etc.) for the 10.2.0.5 patch set.

3. Before upgrading your Oracle Management Service, ensure that the shared pool size in your Grid Control repository database is set appropriately for your infrastructure; the minimum shared pool size is 512 MB.
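For example, to check the current value in the repository database and, if needed, raise it (512 MB is the stated minimum; the value shown may be 0 when SGA_TARGET is in use, and the right size depends on your environment):

SQL> show parameter shared_pool_size
SQL> alter system set shared_pool_size=512M scope=both;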

4. If the database housing the Grid Control repository is linked to a Data Guard database, the database must be forced into logging mode before the upgrade starts, so that all index maintenance commands are propagated to the standby database.

The upgrade by default will perform the index maintenance commands in a 'NOLOGGING' mode. This means that any Data Guard database linked to the repository database will not have these index operations replicated to it.

To make sure these commands get replicated to the Data Guard instance, force the database into logging mode:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER DATABASE FORCE LOGGING;


Once the OMS upgrade is successful then revert the Data Guard force logging changes:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER DATABASE NO FORCE LOGGING;


5. If your Enterprise Manager has been shut down for a long period of time, you must follow these steps before performing the upgrade:

a. Log into the Repository Database as SYSMAN

b. Run the following SQL:
   

   SQL> exec emd_maintenance.analyze_emd_schema('SYSMAN');

c. Start the Oracle management service
   Go to /OracleHomes/oms10g/BIN
   
$./emctl start oms

6. Make sure that there are no invalid sysman objects present in Grid Control repository.

Important: Oracle recommends backing up the database before you perform the upgrade operation. Perform the following steps before upgrading:

a. Log in to the database as the SYS user.

b. Check if there are any invalid SYSMAN objects:


SQL> select object_name, object_type from all_objects where owner='SYSMAN' and status <> 'VALID';


The above query should return 0 rows. If there are rows, then run the following SQL statement:

SQL> @admin_recompile_invalid.sql SYSMAN

The admin_recompile_invalid.sql script is available under
<ORACLE_HOME>/sysman/admin/emdrep/sql/core/latest/admin/

Where <ORACLE_HOME> is the Oracle home directory of the Oracle Management Service (OMS).

c. Repeat step b to be sure all SYSMAN objects are valid. If you still have invalid SYSMAN objects that are not in the recycle bin, contact Oracle Support.

d. Check that there are no invalid SYS objects:

SQL> select object_name, object_type from all_objects where status<>'VALID' and object_name like 'DBMS%'; 
The above query should return 0 rows. If there are rows, then try to recompile them. For example:

If the object_type is PACKAGE and object_name is ABC:
alter package abc compile;

If the object_type is PACKAGE BODY and object_name is ABC:
alter package abc compile body;


7. If any of the packages fail to become valid even after recompilation, contact Oracle Support for assistance. Once this verification is done, continue with the rest of the pre-installation tasks.

                                                             -------------------> To be continued in Part II