
Saturday, January 19, 2019

Oracle database refresh


Source db name      = ABC
Destination db name = DEF

This procedure assumes the following:

1. The OS version and database version of both databases are the same.

Taking the RMAN backup on the source database

Check whether the source DB is running in archivelog mode. Run the below commands:
sudo su - <dbuser>
sqlplus / as sysdba
archive log list
In case the source DB is running in archivelog mode, do the below steps to take an online backup of the source DB using RMAN:
Copy the below contents into a file /tmp/archivelogdbbkp.rman

RUN {
CROSSCHECK BACKUP;
CROSSCHECK COPY;
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT EXPIRED COPY;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
ALLOCATE CHANNEL ch00 TYPE DISK;
ALLOCATE CHANNEL ch01 TYPE DISK;
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
BACKUP AS COMPRESSED BACKUPSET FORMAT '<directory_name>/DB_BACKUP_%d_%T_%t_%s_%p.rman' DATABASE PLUS ARCHIVELOG;
BACKUP CURRENT CONTROLFILE FORMAT '<directory_name>/CONTROLFILE_BACKUP_%d_%T_%t_%s_%p.rman';
}

Allocate more channels as per need.
Then run the below nohup command to initiate the RMAN backup:
nohup rman target / cmdfile='/tmp/archivelogdbbkp.rman' log='/tmp/archivelogdbbkp.log' &
In case the source DB is running in NOARCHIVELOG mode, do the below steps to take an offline RMAN backup of the source DB:
Copy the below contents into a file /tmp/noarchivelogdbbkp.rman

RUN {
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
CROSSCHECK BACKUP;
CROSSCHECK COPY;
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT EXPIRED COPY;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
ALLOCATE CHANNEL ch00 TYPE DISK;
ALLOCATE CHANNEL ch01 TYPE DISK;
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
BACKUP AS COMPRESSED BACKUPSET FORMAT '<directory_name>/DB_BACKUP_%d_%T_%t_%s_%p.rman' DATABASE;
BACKUP CURRENT CONTROLFILE FORMAT '<directory_name>/CONTROLFILE_BACKUP_%d_%T_%t_%s_%p.rman';
ALTER DATABASE OPEN;
}

Allocate more channels as per need.

Then run the below nohup command to initiate the rman backup
nohup rman target / cmdfile='/tmp/noarchivelogdbbkp.rman' log='/tmp/noarchivelogdbbkp.log' &
Monitor the RMAN logfile and make sure that no ORA- or RMAN- errors occur in it.
Also, run the below SQL query to check the backup progress:
set pages 333 lines 222
select inst_id,sid,SOFAR,TOTALWORK,round(SOFAR*100/TOTALWORK) "Percent Complete",
TIME_REMAINING from gv$session_longops
where sid in
(select sid from gv$session where username is not null /*and status='ACTIVE'*/)
and TIME_REMAINING <> 0
order by 4 desc
/
Once the RMAN backup is complete, scp all the RMAN backup files onto the DEF db server. Make sure the db user of the DEF db server has read access on each of these RMAN backup files.
Run REPORT SCHEMA on the ABC db and save its output; it will be used while restoring this backup on the DEF db.
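One way to confirm the read access mentioned above is a small shell helper like the sketch below. The helper name and the staging-directory convention are illustrative, not part of any standard tooling:

```shell
# Verify that every RMAN backup piece (*.rman) in a directory is readable
# by the current OS user. Prints any unreadable file; returns non-zero if
# at least one is found.
check_backup_readable() {
  dir=$1
  bad=0
  for f in "$dir"/*.rman; do
    [ -e "$f" ] || continue          # glob matched nothing
    if [ ! -r "$f" ]; then
      echo "NOT READABLE: $f"
      bad=1
    fi
  done
  return "$bad"
}

# Example (path is illustrative):
# check_backup_readable /backup/stage && echo "all pieces readable"
```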


On DEF db:

select name,open_mode from v$database;
Take the backup of the spfile and pfile:
sudo su - <dbuser>
cp $ORACLE_HOME/dbs/spfileDEF.ora /tmp/
sqlplus / as sysdba
show parameter pfile;
create pfile='/tmp/initDEF.ora' from spfile;
Shutdown all the DEF database instances.
Change the cluster_database parameter to false in $ORACLE_HOME/dbs/initDEF.ora (only needed if the db has more than one instance).
Startup and mount the database in exclusive mode:
sqlplus / as sysdba
startup mount restrict exclusive pfile='$ORACLE_HOME/dbs/initDEF.ora';
--Drop the DEF database
drop database;
Now check that all space has been cleared (datafiles, logfiles, and control files should be deleted). Delete anything left over manually.
Make sure the datafiles filesystem or diskgroup of the DEF db server has sufficient space to store the data of the Live database.
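To double-check that the drop left nothing behind, a find over the datafile locations can list any surviving files. The helper name and the extensions searched are assumptions; adjust them to the site's naming conventions:

```shell
# List any datafiles, redo logs, or control files that survived the drop.
# Point it at the actual datafile filesystem(s) of the DEF db server.
list_leftovers() {
  root=$1
  find "$root" -type f \( -name '*.dbf' -o -name '*.log' -o -name '*.ctl' \) 2>/dev/null
}

# Example (path is illustrative):
# list_leftovers /oracleDEF/DEF
```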

Rename the pfile to match the source db name:
cp /tmp/initDEF.ora $ORACLE_HOME/dbs/initABC.ora
Change the value of the db_name parameter to ABC in initABC.ora.
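The db_name edit can be done by hand, or with a one-line sed over the copied pfile. The helper below assumes the parameter is written on its own line with the generic `*.` prefix; check the actual file before running:

```shell
# Rewrite the db_name parameter in a pfile to ABC.
# Assumes the parameter appears as *.db_name='...' on its own line.
set_db_name() {
  pfile=$1
  sed -i "s/^\*\.db_name=.*/\*.db_name='ABC'/" "$pfile"
}

# Example:
# set_db_name $ORACLE_HOME/dbs/initABC.ora
```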



Make sure that the name of the undo tablespace is the same as its name in Live.
Make sure that shared_pool_size is set with the generic prefix, i.e. *.shared_pool_size.




Change the ORACLE_SID env variable to ABC:

setenv ORACLE_SID ABC

or 

export ORACLE_SID=ABC



Start up the database in nomount state.
Now catalog the RMAN backup files:


catalog start with '<directory location where rman backup files are present on DEF db server>/';
Check the REPORT SCHEMA output taken earlier on the ABC db. Get the file_ids and their sizes from it.
If the DEF db stores its datafiles on a file-system-based server:

We need to allocate the datafiles to the different filesystems on the destination server according to the free space of each FS. E.g. if the sapdata1 FS has 100GB of free space and datafiles 1 to 4 total 95GB in size, then datafiles 1 to 4 will be allocated to the sapdata1 FS. Similarly, the subsequent datafiles are allocated to the remaining FSs. Once this calculation is done, the below RMAN script should be prepared for reference.
RUN
{
Allocate channel ch00 type disk;
Allocate channel ch01 type disk;
Allocate channel ch02 type disk;
SET NEWNAME FOR DATAFILE 1 TO '/oracleDEF/DEF/sapdata1/datafiles/system.270.831670025';
SET NEWNAME FOR DATAFILE 2 TO '/oracleDEF/DEF/sapdata1/datafiles/sysaux.271.831670051';
SET NEWNAME FOR DATAFILE 3 TO '/oracleDEF/DEF/sapdata1/datafiles/psapundo.267.831669899';
SET NEWNAME FOR DATAFILE 4 TO '/oracleDEF/DEF/sapdata1/datafiles/psapsr3.260.831669393';
SET NEWNAME FOR DATAFILE 5 TO '/oracleDEF/DEF/sapdata2/datafiles/psapsr3.272.831670075';
SET NEWNAME FOR DATAFILE 6 TO '/oracleDEF/DEF/sapdata2/datafiles/psapsr3702.259.831669277';
SET NEWNAME FOR DATAFILE 7 TO '/oracleDEF/DEF/sapdata2/datafiles/psapsr3702.261.831669489';
...
..
.
SET NEWNAME FOR DATAFILE 55 TO '/oracleDEF/DEF/sapdata5/datafiles/psapsr3.312.944651039';
SET NEWNAME FOR DATAFILE 56 TO '/oracleDEF/DEF/sapdata5/datafiles/psapsr3.313.946635317';
SET NEWNAME FOR DATAFILE 57 TO '/oracleDEF/DEF/sapdata5/datafiles/psapsr3.314.947955021';
SET NEWNAME FOR DATAFILE 58 TO '/oracleDEF/DEF/sapdata5/datafiles/psapsr3.315.950772363';
restore database;
switch datafile all;
recover database;
}


Allocate more channels as per need.
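The datafile-to-filesystem calculation can be scripted instead of done by hand. The sketch below (helper name, input-file formats, and the generated file names are all illustrative assumptions) does a simple first-fit pass over "file_id size_gb" pairs taken from the REPORT SCHEMA output against "mount_point free_gb" pairs, and prints one SET NEWNAME line per datafile for pasting into the restore script:

```shell
# First-fit allocation of datafiles to filesystems, emitting RMAN
# SET NEWNAME commands. Input file formats (assumptions):
#   datafiles.txt   : "<file_id> <size_gb>" per line, in file_id order
#   filesystems.txt : "<mount_point> <free_gb>" per line
gen_set_newname() {
  awk '
    NR == FNR { fs[++n] = $1; free[n] = $2; next }   # load filesystems
    {
      placed = 0
      for (i = 1; i <= n && !placed; i++) {
        if (free[i] >= $2) {
          free[i] -= $2
          printf "SET NEWNAME FOR DATAFILE %d TO '\''%s/datafile_%d.dbf'\'';\n", $1, fs[i], $1
          placed = 1
        }
      }
      if (!placed) printf "# no filesystem has %s GB free for datafile %s\n", $2, $1
    }
  ' "$2" "$1"
}

# Example:
# gen_set_newname datafiles.txt filesystems.txt > /tmp/setnewname.rman
```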


Now start the restore from the backup. Copy the above contents into a file /tmp/restore.rman.
Then run the below nohup command to initiate the RMAN restore on the DEF db server:
nohup rman target / cmdfile='/tmp/restore.rman' log='/tmp/restore.log' &
Monitor the logfile /tmp/restore.log and make sure that no ORA- or RMAN- errors occur in it.
Also, run the below sql query to check the restore progress.
set pages 333 lines 222
select inst_id,sid,SOFAR,TOTALWORK,round(SOFAR*100/TOTALWORK) "Percent Complete",
TIME_REMAINING from gv$session_longops
where sid in
(select sid from gv$session where username is not null /*and status='ACTIVE'*/)
and TIME_REMAINING <> 0
order by 4 desc
/
Once it is completed, check the /tmp/restore.log file and make sure that no ORA- errors appear in it.
Also, make sure that all archivelogs present in the RMAN backup have been successfully applied; otherwise, explicitly restore the archivelogs from the backup and recover the database manually.
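If the archivelogs do have to be applied by hand, a small command file along these lines can be generated and fed to RMAN the same way as the restore script. The file names are illustrative:

```shell
# Generate an RMAN command file that restores the archivelogs from the
# backup and applies them. Run it afterwards with:
#   nohup rman target / cmdfile='/tmp/recover_arch.rman' log='/tmp/recover_arch.log' &
cat > /tmp/recover_arch.rman <<'EOF'
RUN {
ALLOCATE CHANNEL ch00 TYPE DISK;
RESTORE ARCHIVELOG ALL;
RECOVER DATABASE;
}
EOF
```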


If the DEF db stores its datafiles on a diskgroup-based server:

Normally only one diskgroup is present to store all the datafiles, so make sure that the diskgroup has sufficient space to store all the datafiles of the ABC database.

Say the diskgroup name is +DATA; the below RMAN script should be prepared for the restore:

RUN
{
Allocate channel ch00 type disk;
Allocate channel ch01 type disk;
Allocate channel ch02 type disk;
SET NEWNAME FOR DATAFILE 1 TO '+DATA';
SET NEWNAME FOR DATAFILE 2 TO '+DATA';
SET NEWNAME FOR DATAFILE 3 TO '+DATA';
SET NEWNAME FOR DATAFILE 4 TO '+DATA';
SET NEWNAME FOR DATAFILE 5 TO '+DATA';

...
..
.
SET NEWNAME FOR DATAFILE 55 TO '+DATA';
SET NEWNAME FOR DATAFILE 56 TO '+DATA';
SET NEWNAME FOR DATAFILE 57 TO '+DATA';
SET NEWNAME FOR DATAFILE 58 TO '+DATA';
restore database;
switch datafile all;
recover database;
}

Allocate more channels as per need.


Now copy the above contents into a file /tmp/restore.rman.
Then run the below nohup command to initiate the RMAN restore on the DEF db server:
nohup rman target / cmdfile='/tmp/restore.rman' log='/tmp/restore.log' &
Monitor the logfile /tmp/restore.log and make sure that no ORA- or RMAN- errors occur in it.
Also, run the below sql query to check the restore progress.
set pages 333 lines 222
select inst_id,sid,SOFAR,TOTALWORK,round(SOFAR*100/TOTALWORK) "Percent Complete",
TIME_REMAINING from gv$session_longops
where sid in
(select sid from gv$session where username is not null /*and status='ACTIVE'*/)
and TIME_REMAINING <> 0
order by 4 desc
/
Once it is completed, check the /tmp/restore.log file and make sure that no ORA- errors appear in it.
Also, make sure that all archivelogs present in the RMAN backup have been successfully applied; otherwise, explicitly restore the archivelogs from the backup and recover the database manually.



Now, check the status of the redo log files using:

select * from v$log;

If any redo log group is in CLEARING status, manually clear that redo log using the below command:

alter database clear logfile group <group id of redo log group>;
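If several groups show CLEARING, the clear statements can be generated from a spooled listing rather than typed one by one. The two-column input format below is an assumption (e.g. a spool of `select group#, status from v$log;`):

```shell
# Emit one ALTER DATABASE CLEAR LOGFILE statement per redo log group
# whose status is CLEARING. Input: "<group#> <status>" per line.
gen_clear_logfile() {
  awk '$2 == "CLEARING" { printf "alter database clear logfile group %d;\n", $1 }' "$1"
}

# Example:
# gen_clear_logfile /tmp/vlog.txt
```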

Also, check the file locations of all redo log files using the below command:

select * from v$logfile;

Rename all the logfiles as per the FS structure or disk group of the DEF database server, using the below command for each of those logfiles:

alter database rename file '<old logfile path>' to '<new logfile path>';

Then open the database with the resetlogs option:

alter database open resetlogs;

Monday, August 28, 2017

Oracle 11gR2 RAC startup sequence

The below article applies to Oracle RAC 11gR2 and later releases.

Below are the steps in brief:

1. Startup of the init.ohasd process. It further starts up the ohasd.bin process.

2. ohasd accesses the OLR file, gets the GPNP profile location, and spawns the cssd process.

3. The CSSD daemon accesses the GPNP profile, gets the voting disk location, accesses it, and joins the cluster.

4. The ohasd process accesses the ASM spfile and starts up the ASM instance.

5. Ohasd's orarootagent process starts up the crsd process. The crsd process reads the OCR file and completes the cluster initialization.

6. The crsd process spawns the oraagent and orarootagent processes, and these two further spawn the subsequent CRS resource processes.


Below are the steps with detailed information.

1. Startup of the ohasd.bin process.

The init.ohasd process entry is present in the /etc/inittab file, so it gets started automatically by the OS after the server boots up. This entry is added by the root.sh script after the grid installation. The entry looks like below:

 

h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null

 

"35" refers to the run levels, i.e.

3 is Multiuser mode with networking

5 is Multiuser mode with networking and GUI

It means the OS must be at run level 3 or 5 for this process to be started. We can check the current run level of the OS using the "who -r" command.

 

[oracle@edwdev02 ~]$ who -r

         run-level 3  2021-06-24 12:35

[oracle@edwdev02 ~]$

 

respawn means that whenever the process terminates, init restarts it automatically.

 

The init process has pid=1. Its absolute location is /sbin/init on Linux, Solaris and HP-UX platforms, whereas it is /usr/sbin/init on AIX platforms.

 

[root@node1 ~]# ps -ef|grep init

root         1     0  0 Jun24 ?        00:00:44 /sbin/init

oracle   31134 31026  0 19:29 pts/1    00:00:00 grep init

[root@node1 ~]#

 

This init process starts the "init.ohasd" process. init.ohasd further starts up the ohasd.bin process, which is also called the ohasd service.

 

2. ohasd accesses the OLR file, gets the GPNP profile location, and spawns the cssd process.

The OLR file location is recorded in /etc/oracle/olr.loc. PFB the reference output:

[root@node1 ~]# cat /etc/oracle/olr.loc

olrconfig_loc=/u01/app/11.2.0/grid/cdata/node1.olr

crs_home=/u01/app/11.2.0/grid

 

[root@node1 ~]# ls -ltr /u01/app/11.2.0/grid/cdata/node1.olr

-rw-------. 1 root oinstall 272756736 Jan  8 21:40 /u01/app/11.2.0/grid/cdata/node1.olr

[root@node1 ~]#

 

The OLR file is present in the $GRID_HOME/cdata/ directory and is named '<hostname>.olr'.

The ohasd process gets the location of the OLR file from /etc/oracle/olr.loc, accesses the OLR file, and reads the below info stored in it:

I. GPNP profile details. It is present in $GRID_HOME/gpnp/<hostname>/profiles/peer/

II. OCR latest backup time and location.

III. CRS_HOME.

IV. Active CRS version.

V. Local host name and CRS version.

VI. Status of the resources, to know which are to be started and which are to be kept OFFLINE.

VII. Start and stop dependencies of the resources, i.e.

a Hard dependency must be fulfilled;

a Weak dependency should be fulfilled.


GPNP profile stands for Grid Plug and Play profile. The file is located at $CRS_HOME/gpnp/<node_name>/profiles/peer/profile.xml. This profile contains the cluster name, hostname, network profiles with IP addresses, and ASM/OCR details. If we make any modification concerning the voting disk, the profile is updated.
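For a quick look at what the profile carries, the interesting attributes can be pulled out of profile.xml with sed. The XML fragment below is a simplified illustration of the real file's structure (real profiles are larger and digitally signed), so the values are assumptions:

```shell
# Extract the ASM discovery string and ASM SPFILE location from a
# GPnP profile. The sample fragment is illustrative only.
cat > /tmp/sample_profile.xml <<'EOF'
<gpnp:GPnP-Profile ClusterName="cluster01">
  <orcl:ASM-Profile DiscoveryString="/dev/asm*" SPFile="+DATA/cluster01/asmparameterfile/spfile.ora"/>
</gpnp:GPnP-Profile>
EOF

# Print the DiscoveryString and SPFile attribute values.
sed -n 's/.*DiscoveryString="\([^"]*\)".*/\1/p' /tmp/sample_profile.xml
sed -n 's/.*SPFile="\([^"]*\)".*/\1/p' /tmp/sample_profile.xml
```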


ohasd.bin process then further spawns:
   1. its orarootagent process.
   2. its oraagent process.
   3. cssdagent process.
   4. cssdmonitor process.

The ohasd orarootagent process further spawns the gpnpd process, and the cssdagent process further spawns the cssd daemon process.

 

3. The CSSD daemon accesses the GPNP profile, gets the voting disk location, accesses it, and joins the cluster.

 

The cssd daemon then accesses the local GPNP profile of the node at /u01/app/12.2.0.2/grid/gpnp/node2/profiles/peer/profile.xml. It contains the below info:

 

   i. Name of the ASM Diskgroup containing the Voting Files
   ii. ASM SPFILE location (Diskgroup name)
   iii. ASM Diskgroup Discovery String

 

FYI:
Oracle ASM reserves several physical blocks at a fixed location on every Oracle ASM disk to store the voting file, i.e. the physical location of the voting file is fixed on each ASM disk. As a result, Oracle Clusterware can access the voting disks present in ASM even if the ASM instance is down.

The cssd process mainly reads the voting files' diskgroup location from the profile. It then physically scans the headers of each disk in that diskgroup to find the disks that contain the voting file. Having located the voting files, it joins the cluster.

 

4. The ohasd process accesses the ASM spfile and starts up the ASM instance.

In the same way, the ASM spfile is read to start up the ASM instance.

5. Ohasd's orarootagent process starts up the crsd process. The crsd process reads the OCR file and completes the cluster initialization.

6. The crsd process spawns the oraagent and orarootagent processes.

 

The CRSD orarootagent then further spawns the below resources:
   1. Network resource : for the public network
   2. SCAN VIPs
   3. Node VIPs : VIPs for each node
   4. ACFS Registry
   5. GNS VIP : VIP for GNS if you use the GNS option

and the CRSD oraagent spawns the below resources:

   1. ASM Resources
   2. Diskgroups
   3. DB Resources
   4. SCAN Listener : Listener for SCAN listening on SCAN VIP
   5. Listener : Node Listener listening on the Node VIP
   6. Services : Database Services
   7. ONS
   8. GSD
   9. GNS