Monday, May 30, 2011

Single Instance to RAC

Current situation: Node 1 running a single-instance database, 100 GB in size.


Migrate it to RAC
---------------------------

Prerequisites:

Node 2 provisioned
Shared storage available to both nodes
Network connectivity verified across both nodes



Install Oracle Clusterware on the new node (Node 2).
Install the Oracle software for ASM as well as the database software.

Backup and restore the database files to the shared location.

Downtime is required.

Migrate the Node 1 database to Node 2: this is a server migration, not
a DB migration.

Script to execute: $ORACLE_HOME/rdbms/admin/catclust.sql
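The script can be run from SQL*Plus on the migrated instance; a minimal sketch, assuming ORACLE_SID is already set to that instance:

```shell
# Run as SYSDBA on the instance being cluster-enabled; catclust.sql
# creates the RAC data-dictionary views (GV$ views and related objects).
sqlplus / as sysdba <<'EOF'
@?/rdbms/admin/catclust.sql
EOF
```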

Install Clusterware on Node 1, install the database software, and run
rootaddnode.sh.

Install the ASM home.

Node 1: with a pfile pointing at the shared location, the database can be
mounted and opened.

srvctl add database -d
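The full registration typically includes the instance as well; a hedged sketch, assuming a database named orcl with instance orcl1 (the names and home path are placeholders, not from these notes):

```shell
# Placeholders throughout: orcl / orcl1 / the 11.1 database home path.
srvctl add database -d orcl -o /u01/app/oracle/product/11.1.0/db_1
srvctl add instance -d orcl -i orcl1 -n eg8228
srvctl start database -d orcl
```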


From Node 2, Node 1 is then added to the cluster.

Delete the Node 2 instance from Node 1 in RAC 11g R1

Step 1: Delete the Node 2 instance from Node 1 using dbca
(in a VNC Viewer session).
Step 2: Clean up the ASM instance
srvctl stop asm -n eg8238
srvctl remove asm -n eg8238
Remove the pfile used by ASM (ASM_HOME here is your ASM software home):
rm $ASM_HOME/dbs/*ASM*
Remove the OFA directory tree of ASM:
rm -Rf /u01/app/oracle/admin/+ASM
Remove the +ASM entry from /etc/oratab.
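Editing /etc/oratab by hand works, but the +ASM entry can also be dropped non-interactively without disturbing other databases' entries. A minimal sketch against a sample copy of the file (the sample entries are illustrative):

```shell
# Sample oratab for illustration; on a real host edit /etc/oratab itself.
cat > /tmp/oratab.sample <<'EOF'
+ASM:/u01/app/oracle/product/11.1.0/asm_1:N
orcl:/u01/app/oracle/product/11.1.0/db_1:N
EOF

# Delete only the +ASM entry, leaving other databases registered.
sed -i '/^+ASM:/d' /tmp/oratab.sample
cat /tmp/oratab.sample
```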

Step 3: Clean up the listener
Shut down the listener before removal.
Use netca from Node 2 and remove the node listener.

Step 4: Remove the database software, first updating the
local inventory for software removal.

cd $ORACLE_HOME/oui/bin

./runInstaller -updateNodeList \
    ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 \
    "CLUSTER_NODES=eg8238" -local

This updates the local inventory for removal of the database software.
Then deinstall the home:
./runInstaller

Once this is done, update the inventory on all remaining nodes.

Log in to each remaining node and run:

./runInstaller -updateNodeList \
    ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 \
    CLUSTER_NODES=eg8228



Step 5: Remove the ASM software, first updating the
local inventory for software removal.

Ensure that ASM_HOME has a value:
echo $ASM_HOME

cd $ASM_HOME/oui/bin

./runInstaller -updateNodeList \
    ORACLE_HOME=$ASM_HOME \
    "CLUSTER_NODES=eg8238" -local

This updates the local inventory for removal of the ASM software.
Then deinstall the home:
./runInstaller

Once this is done, update the inventory on all remaining nodes.

Log in to each remaining node and run:

./runInstaller -updateNodeList \
    ORACLE_HOME=$ASM_HOME \
    CLUSTER_NODES=eg8228


Step 6: Remove the Clusterware software

Remove ONS first.

Check the ONS remote port in $CRS_HOME/opmn/conf/ons.config, then:

$CRS_HOME/bin/racgons remove_config eg8238:6150

Log in as root and run $CRS_HOME/install/rootdelete.sh on the node
being removed.

Then, as root from Node 1:
$CRS_HOME/install/rootdeletenode.sh eg8238,2
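The port passed to racgons comes from ons.config; a sketch of reading it, using a sample file (on a real cluster the file is $CRS_HOME/opmn/conf/ons.config):

```shell
# Sample ons.config for illustration; the remoteport value is what
# racgons remove_config expects after the hostname.
cat > /tmp/ons.config <<'EOF'
localport=6113
remoteport=6150
loglevel=3
useocr=on
EOF

awk -F= '$1 == "remoteport" { print $2 }' /tmp/ons.config
```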





Step 7: Remove the CRS software

cd $CRS_HOME/oui/bin

./runInstaller -updateNodeList \
    ORACLE_HOME=$CRS_HOME \
    "CLUSTER_NODES=eg8238" \
    CRS=TRUE -local

Then deinstall the home:
./runInstaller

On each remaining node:
./runInstaller -updateNodeList ORACLE_HOME=$CRS_HOME \
    CLUSTER_NODES=eg8228 CRS=TRUE

Add a RAC node to an existing RAC cluster in 11g R1

Node addition

eg8228 : Node 1
eg8238 : Node 2 (the node being added)



Step 1: Run cluvfy from Node 1 to identify any issues before installation:

cluvfy stage -pre crsinst -n eg8228,eg8238 -r 11gR1
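A hedged sketch of the verification, with -verbose added for fuller output and a post-install check once the clusterware has been extended:

```shell
# Node names from these notes; -r 11gR1 pins the release being checked.
cluvfy stage -pre crsinst -n eg8228,eg8238 -r 11gR1 -verbose

# After addNode.sh and the root scripts complete:
cluvfy stage -post crsinst -n eg8228,eg8238 -verbose
```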

Step 2: From Node 1 (in a GUI session), run:
$CRS_HOME/oui/bin/addNode.sh

Run root.sh and rootaddnode.sh on the indicated nodes when prompted.

Verify with crs_stat -t that the resources are running on the new node.

Step 3: Configure ONS for the new node:
$CRS_HOME/bin/racgons add_config eg8228:6251


Step 4: From Node 1, go to $ASM_HOME/oui/bin and run addNode.sh.

Select the node to extend to, and run root.sh when prompted.

Step 5: From Node 1, go to $ORACLE_HOME/oui/bin and run addNode.sh.

Select the node to install to.

Step 6: On Node 2, run netca from the ASM home to configure the
listener on this node.

Step 7: On Node 2, from the ASM home, run dbca and choose
Configure Automatic Storage Management.

Step 8: On Node 2, from the database home, run dbca, choose
Instance Management, and add the database instance.
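The same instance addition can also be scripted with dbca in silent mode; a sketch, assuming a database orcl and a new instance orcl2 (names and password are placeholders):

```shell
# Run from the database home on an existing node; placeholders throughout.
dbca -silent -addInstance \
     -nodeList eg8238 \
     -gdbName orcl \
     -instanceName orcl2 \
     -sysDBAUserName sys \
     -sysDBAPassword change_me
```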