Installing 11g RAC on Linux VM: Install Oracle Database software and create RAC database

Previous: Install Oracle Clusterware

You can start the database installation from the <stage>/database directory as follows. Make sure to run it as the database owner user, oradb.

[oradb@dbhost1 ~]$ cd /mnt/hgfs/setup/database/

[oradb@dbhost1 database]$ ./runInstaller

Deselect the security updates checkbox and click Next

Click Yes

Select “Create and configure a database”. We could also install only the database software and create the database later using DBCA, but here we will create the database as part of the installation. Click Next

Select Server Class and click Next

Since the installer detects the presence of clusterware, it will prompt you to choose a RAC database installation. Select RAC and select all nodes. Click Next

Note: If this screen does not appear, do not proceed; the installer may not have detected the cluster services running. You must see this screen for a proper RAC database installation.

It will test passwordless SSH connectivity for the oradb user on both nodes.
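If this check fails, you can verify user equivalence manually before retrying. A quick hedged check using the host names from this guide; each command should print the remote date without prompting for a password:

[oradb@dbhost1 ~]$ ssh oradb@dbhost2 date

[oradb@dbhost2 ~]$ ssh oradb@dbhost1 date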

Select “Advanced install” and click Next

Click Next

Select Enterprise Edition. Click on Select Options.

Select whichever options you need. On production systems, be careful to select only the options you have licensed.

Select Software Location as /app/oracle/product/11.2.0/dbhome_1

This will be your ORACLE_HOME. Click Next
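Once the installation completes, you will typically want this home in the oradb user's environment. A minimal sketch for oradb's ~/.bash_profile on node 1; the instance name rac11g1 assumes the database name rac11g used later in this guide (use rac11g2 on node 2):

export ORACLE_HOME=/app/oracle/product/11.2.0/dbhome_1

export ORACLE_SID=rac11g1

export PATH=$ORACLE_HOME/bin:$PATH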

Select “General Purpose / Transactional Processing”. Click Next

Enter an appropriate global database name for your database (we use rac11g in this guide). Click Next

Select “Enable Automatic Memory Management” and specify the amount of memory to allocate. Then click the second tab, “Character sets”.

Select Unicode. This is the best option if you plan to store non-English characters in the future, since it supports all Unicode characters. Click Next

Click Next

Select File System and specify location as /u01/oradata

Click Next

Since this is a demo database, select “Do not enable automated backups”. Click Next

For a demo database you can use the same password for all accounts. Click Next

Click Yes if you want to keep the simple password.

Select dba and oinstall for the OSDBA and OSOPER groups respectively. Click Next

Review the summary and, if required, save the response file. Then click Finish to begin the installation.

[oradb@dbhost1 database]$ You can find the log of this install session at:

/app/oraInventory/logs/installActions<timestamp>.log

Next, it will create the database, since we chose to create one during installation. Make sure the /app/oracle/cfgtoollogs directory is writable by group members; if not, execute the following to avoid log file write errors.

 

chmod g+w /app/oracle/cfgtoollogs/

 

 

Once this completes, it will prompt you to run the $ORACLE_HOME/root.sh script on both nodes. I missed taking a screenshot of that last part, so it is not included in this post. This script runs several scripts internally that change a few permissions and ownerships and add an entry for the database in the /etc/oratab file.
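Since the screenshot is missing, here is a hedged sketch of what this step looks like; the path follows from the ORACLE_HOME chosen above, and the script is run as root on each node:

[root@dbhost1 ~]# /app/oracle/product/11.2.0/dbhome_1/root.sh

[root@dbhost2 ~]# /app/oracle/product/11.2.0/dbhome_1/root.sh

Afterwards you can check whether an entry was added to /etc/oratab (entries follow the SID:ORACLE_HOME:Y|N format):

[root@dbhost1 ~]# grep -i rac11g /etc/oratab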

 

 

Once the installation is finished, you can verify that the database is running on both nodes using the following command.

 

[oradb@dbhost1 ~]$ srvctl status database -d rac11g

Instance rac11g1 is running on node dbhost1

Instance rac11g2 is running on node dbhost2

 

You can also log in to the database and check.

 

[oradb@dbhost1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on **

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production

With the Partitioning, Real Application Clusters, OLAP, Data Mining

and Real Application Testing options

SQL> select inst_id, instance_name, status from gv$instance;

 

   INST_ID INSTANCE_NAME    STATUS
---------- ---------------- ------------
         1 rac11g1          OPEN
         2 rac11g2          OPEN

 

 

Shutdown/Startup steps for 11gR2 RAC on VMs

 

1. Shutdown steps

 

Since all cluster resources, including the database and listeners, can be controlled by the Oracle Clusterware Control utility (crsctl), we can shut down all services directly using crsctl or the crs_stop script.

 

[oracle@dbhost1 ~]$ /app/11.2.0/grid/bin/crs_stop -all
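Alternatively, the resource stack can be stopped with crsctl. A hedged equivalent, run as root from the grid home; this stops the cluster resources on all nodes but leaves the Oracle High Availability Services daemon running:

[root@dbhost1 ~]# /app/11.2.0/grid/bin/crsctl stop cluster -all

To stop the complete stack on a single node, including the High Availability Services daemon, you can use crsctl stop crs as root on that node.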

 

If you want to shut down only the database or the listener, use the following commands.

 

[oracle@dbhost1 ~]$ /app/11.2.0/grid/bin/srvctl stop listener

 

[oracle@dbhost1 ~]$ /app/11.2.0/grid/bin/srvctl stop database -d rac11g

 

2. Startup steps

 

You can start all cluster resources, including the database and listener, using crsctl or the crs_start script.

 

Before this, make sure you have booted Node 1 first and then Node 2. Give them some time to bring up the clusterware-related services so that both nodes can communicate with the CRS daemon.
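A quick hedged way to confirm that the clusterware stack itself is up on both nodes before starting resources:

[root@dbhost1 ~]# /app/11.2.0/grid/bin/crsctl check cluster -all

This should report the CRS, CSS, and EVM services as online for both dbhost1 and dbhost2.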

 

Run the following command to verify that the cluster is up.

 

[root@dbhost2 ~]# /app/11.2.0/grid/bin/srvctl status nodeapps

VIP dbhost1-vip is enabled

VIP dbhost1-vip is running on node: dbhost1

VIP dbhost2-vip is enabled

VIP dbhost2-vip is running on node: dbhost2

Network is enabled

Network is running on node: dbhost1

Network is running on node: dbhost2

GSD is disabled

GSD is not running on node: dbhost1

GSD is not running on node: dbhost2

ONS is enabled

ONS daemon is running on node: dbhost1

ONS daemon is running on node: dbhost2

eONS is enabled

eONS daemon is running on node: dbhost1

eONS daemon is running on node: dbhost2

 

If everything is fine, use the following commands to start the cluster resources, including the database and listener.

 

[oracle@dbhost1 ~]$ /app/11.2.0/grid/bin/crs_start -all

 

If you want to start only the database or the listener, use the following commands.

 

[oracle@dbhost1 ~]$ /app/11.2.0/grid/bin/srvctl start listener

 

[oracle@dbhost1 ~]$ /app/11.2.0/grid/bin/srvctl start database -d rac11g

 

This concludes the installation of Oracle 11gR2 RAC (11.2.0.1) on two VM nodes. You should upgrade it to 11.2.0.3 to get the latest functionality and bug fixes available for 11gR2; I will cover that in another post.

 

If your installation failed during the database creation/configuration part, there is no need to restart the installation. You can create the database manually using the Database Configuration Assistant (DBCA), which automatically detects the presence of Oracle Clusterware and registers the database as a RAC database.
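A minimal sketch of invoking DBCA as the database owner on node 1, assuming the ORACLE_HOME used in this guide (DBCA also supports a -silent mode driven by command-line parameters or a response file if you prefer a non-GUI run):

[oradb@dbhost1 ~]$ export ORACLE_HOME=/app/oracle/product/11.2.0/dbhome_1

[oradb@dbhost1 ~]$ $ORACLE_HOME/bin/dbca

In the wizard, choose the RAC database option, select both nodes, and follow the same choices described above for the database name, memory, character set, and storage location.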

 

Thanks for reading this article and feel free to ask any questions or help others who have questions in the comments section.

 

Happy Learning !

 

Tushar

 

 

Installing 11g Release 2 Real Application Clusters (11gR2 RAC) on Linux x86-64 Virtual Machine (VM) – Steps

1. Create Virtual Machine and install 64 bit Linux (generic step from previous post, not specific to this guide)

2. Add additional virtual Ethernet card and perform prerequisites in Linux

3. Copy/clone this virtual machine to create second node and modify host details

4. Setup shared file system and other pre-requisites

5. Install Oracle Clusterware

6. Install Oracle Database software and create RAC database

 


Installing 11g RAC on Linux VM: Install Oracle Clusterware

Previous: Setup shared file system and other pre-requisites

The clusterware or database installation needs to be started on only one node, since the installer propagates the files to the remote node automatically during the installation.

If you have the setup files on your host machine, you can share the setup folder with the VM using the VMware or Oracle VirtualBox shared folder option.

The following screen shows how to share a folder with the VM using VMware. This can be done even while the VM is online.

The files you share using this option will be available by default under the /mnt/hgfs/ directory in Linux.

Now let us start the Oracle Clusterware installation.

Log in as the oracle user (grid owner).

[oracle@dbhost1 ~]$ cd /mnt/hgfs/setup/grid/

Start the installation using the ./runInstaller script.

[oracle@dbhost1 grid]$ ./runInstaller

 

 

 

Select “Install and Configure Grid Infrastructure for a Cluster” and click Next

 

 

Select “Advanced Installation” and click Next

 

 

Click Next

 

 

Enter the details as follows and click Next. You can change the values if you want, but make sure to use the same values consistently wherever they are required on later screens.

 

Cluster Name: dbhost-cluster

SCAN Name: dbhost-scan.paramlabs.com

SCAN Port: 1521
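In this lab we are not using DNS, so the SCAN name must resolve through /etc/hosts on both nodes. A hedged example entry is shown below; the IP address is only illustrative, so substitute a free address in your public subnet. Note that resolving the SCAN to a single address via /etc/hosts is acceptable only for test setups; production installations should use DNS or GNS with three addresses.

192.168.112.103   dbhost-scan.paramlabs.com   dbhost-scan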

 

 

It will validate the name entered for SCAN.

 


Since we started the installation on node 1 and it does not yet recognize node 2, only one node is shown here. We need to manually add the second node to the cluster. Click Add

 

 

Enter the hostname and VIP name for the second node. Click OK

 

 

Now both nodes will appear on the screen. Make sure both entries are selected, then click Next

 

 

It will perform various tests, including SSH connectivity, node readiness, user equivalence, and a check for existing public/private interfaces on the hosts.

 

 

 

 

 

It should detect eth0 as the public interface and eth1 as the private interconnect, which is exactly what we want. If they are not detected automatically, set them as above and click Next
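If you want to double-check the interface assignment on each node before this screen, a quick hedged check (on Enterprise Linux 5 the address is shown as “inet addr”):

[root@dbhost1 ~]# /sbin/ifconfig eth0 | grep "inet addr"

[root@dbhost1 ~]# /sbin/ifconfig eth1 | grep "inet addr"

eth0 should be on the public subnet used for the host names and VIPs, and eth1 on the private interconnect subnet.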

 

 

Select Shared File system and click Next

 

 

 

Important Note: Normal redundancy increases the load on the VMs, because NFS has to keep all the redundant copies in sync all the time, and we might face issues during normal running of Oracle RAC.

So please select EXTERNAL REDUNDANCY, which means that you are going to mirror this storage using external methods. We are not actually using any external mirroring here, but for a non-production, learning setup we can take that risk.

Enter the following value for the OCR location in the External Redundancy box and click Next:

/u01/storage/ocr

 

Important Note: The same consideration applies to the voting disk storage screen, so select EXTERNAL REDUNDANCY here as well.

Enter the following value in the External Redundancy box and click Next:

/u01/storage/vdsk

 

 

Select Do not use IPMI and click Next

 

 

We have selected dba for all the above groups; you can choose different groups if you wish. Click Next

 

 

If you chose dba for all the above groups, you might see the above message box; click Yes

 

 

Enter the following values and click Next
Oracle Base: /app/oracle

Software Location: /app/11.2.0/grid

 

 

 

Enter /app/oraInventory for the Inventory location. Click Next

 

 

 

You might see the failed prerequisites above. We deliberately did not apply these prerequisites beforehand, in order to show that Oracle will generate a script on this screen to fix every prerequisite whose “Fixable” column shows Yes.

 

The physical memory and swap checks can be ignored. Click “Fix and Check Again” to generate the fix script.

 

 

It will show the above screen with the location of the runfixup.sh script, which you need to run as root on both nodes.

 

[root@dbhost1 ~]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh

Response file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.response

Enable file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.enable

Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log

uid=54322(grid) gid=54322(dba) groups=54322(dba),54321(oinstall)

[root@dbhost2 ~]# /tmp/CVU_11.2.0.1.0_grid/runfixup.sh

Response file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.response

Enable file being used is :/tmp/CVU_11.2.0.1.0_grid/fixup.enable

Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log

uid=54322(grid) gid=54322(dba) groups=54322(dba),54321(oinstall)

 

 

Now it will show only the two memory-related warnings above. We can ignore them; check “Ignore All”.

 

 

Click Next

 

 

Review the summary and click Finish to begin the installation. If you wish, you can save the response file before clicking Finish.

 

 

 

After the installation finishes on node 1, it will propagate the files to node 2, after which you will be prompted to run some scripts as the root user.

 

 

Run the first script on both nodes, and then the second script on both nodes.

 

[root@dbhost1 ~]# /app/oraInventory/orainstRoot.sh

Changing permissions of /app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /app/oraInventory to dba.

The execution of the script is complete.

 

[root@dbhost2 ~]# /app/oraInventory/orainstRoot.sh

Changing permissions of /app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /app/oraInventory to dba.

The execution of the script is complete.

 

 

[root@dbhost1 ~]# /app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script…

 

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /app/11.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin …

Copying oraenv to /usr/local/bin …

Copying coraenv to /usr/local/bin …

 

 

Creating /etc/oratab file…

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2013-02-19 12:35:15: Parsing the host name

2013-02-19 12:35:15: Checking for super user privileges

2013-02-19 12:35:15: User has super user privileges

Using configuration parameter file: /app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user ‘root’, privgrp ‘root’..

Operation successful.

root wallet

root wallet cert

root cert export

peer wallet

profile reader wallet

pa wallet

peer wallet keys

pa wallet keys

peer cert request

pa cert request

peer cert

pa cert

peer root cert TP

profile reader root cert TP

pa root cert TP

peer pa cert TP

pa peer cert TP

profile reader pa cert TP

profile reader peer cert TP

peer user cert

pa user cert

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:

 

acfsroot: ACFS-9302: No installation files found at /app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5uek-x86_64/bin.

 

CRS-2672: Attempting to start ‘ora.gipcd’ on ‘dbhost1’

CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.gipcd’ on ‘dbhost1’ succeeded

CRS-2676: Start of ‘ora.mdnsd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.gpnpd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘dbhost1’

CRS-2676: Start of ‘ora.cssdmonitor’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.cssd’ on ‘dbhost1’

CRS-2672: Attempting to start ‘ora.diskmon’ on ‘dbhost1’

CRS-2676: Start of ‘ora.diskmon’ on ‘dbhost1’ succeeded

CRS-2676: Start of ‘ora.cssd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.ctssd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.ctssd’ on ‘dbhost1’ succeeded

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user ‘root’, privgrp ‘root’..

Operation successful.

CRS-2672: Attempting to start ‘ora.crsd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.crsd’ on ‘dbhost1’ succeeded

Now formatting voting disk: /u01/cluster/vdsk1.

Now formatting voting disk: /u02/cluster/vdsk2.

Now formatting voting disk: /u03/cluster/vdsk3.

CRS-4603: Successful addition of voting disk /u01/cluster/vdsk1.

CRS-4603: Successful addition of voting disk /u02/cluster/vdsk2.

CRS-4603: Successful addition of voting disk /u03/cluster/vdsk3.

## STATE File Universal Id File Name Disk group

— —– —————– ——— ———

1. ONLINE 91ce08c1a7254ff5bfe2d1125bafd956 (/u01/cluster/vdsk1) []

2. ONLINE 44f4d1a582e54ffdbf600efd4fb30cff (/u02/cluster/vdsk2) []

3. ONLINE 60b10b42b1334f2fbf753c9a4a4e85d2 (/u03/cluster/vdsk3) []

Located 3 voting disk(s).

CRS-2673: Attempting to stop ‘ora.crsd’ on ‘dbhost1’

CRS-2677: Stop of ‘ora.crsd’ on ‘dbhost1’ succeeded

CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘dbhost1’

CRS-2677: Stop of ‘ora.ctssd’ on ‘dbhost1’ succeeded

CRS-2673: Attempting to stop ‘ora.cssdmonitor’ on ‘dbhost1’

CRS-2677: Stop of ‘ora.cssdmonitor’ on ‘dbhost1’ succeeded

CRS-2673: Attempting to stop ‘ora.cssd’ on ‘dbhost1’

CRS-2677: Stop of ‘ora.cssd’ on ‘dbhost1’ succeeded

CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘dbhost1’

CRS-2677: Stop of ‘ora.gpnpd’ on ‘dbhost1’ succeeded

CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘dbhost1’

CRS-2677: Stop of ‘ora.gipcd’ on ‘dbhost1’ succeeded

CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘dbhost1’

CRS-2677: Stop of ‘ora.mdnsd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.mdnsd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.gipcd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.gipcd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.gpnpd’ on ‘dbhost1’ succeeded

 

CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘dbhost1’

CRS-2676: Start of ‘ora.cssdmonitor’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.cssd’ on ‘dbhost1’

CRS-2672: Attempting to start ‘ora.diskmon’ on ‘dbhost1’

CRS-2676: Start of ‘ora.diskmon’ on ‘dbhost1’ succeeded

CRS-2676: Start of ‘ora.cssd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.ctssd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.ctssd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.crsd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.crsd’ on ‘dbhost1’ succeeded

CRS-2672: Attempting to start ‘ora.evmd’ on ‘dbhost1’

CRS-2676: Start of ‘ora.evmd’ on ‘dbhost1’ succeeded

 

 

 

dbhost1 2013/02/19 12:42:00 /app/11.2.0/grid/cdata/dbhost1/backup_20130219_124200.olr

Preparing packages for installation…

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster … succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer…

 

Checking swap space: must be greater than 500 MB. Actual 8178 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /app/oraInventory

‘UpdateNodeList’ was successful.

 

====

 

[root@dbhost2 ~]# /app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script…

 

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /app/11.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

Copying dbhome to /usr/local/bin …

Copying oraenv to /usr/local/bin …

Copying coraenv to /usr/local/bin …

 

 

Creating /etc/oratab file…

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2013-02-19 12:44:08: Parsing the host name

2013-02-19 12:44:08: Checking for super user privileges

2013-02-19 12:44:08: User has super user privileges

Using configuration parameter file: /app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user ‘root’, privgrp ‘root’..

Operation successful.

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

acfsroot: ACFS-9301: ADVM/ACFS installation can not proceed:

 

acfsroot: ACFS-9302: No installation files found at /app/11.2.0/grid/install/usm/EL5/x86_64/2.6.18-8/2.6.18-8.el5uek-x86_64/bin.

 

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node dbhost1, number 1, and is terminating

CRS-2673: Attempting to stop ‘ora.cssdmonitor’ on ‘dbhost2’

CRS-2677: Stop of ‘ora.cssdmonitor’ on ‘dbhost2’ succeeded

An active cluster was found during exclusive startup, restarting to join the cluster

CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘dbhost2’

CRS-2676: Start of ‘ora.mdnsd’ on ‘dbhost2’ succeeded

CRS-2672: Attempting to start ‘ora.gipcd’ on ‘dbhost2’

CRS-2676: Start of ‘ora.gipcd’ on ‘dbhost2’ succeeded

CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘dbhost2’

CRS-2676: Start of ‘ora.gpnpd’ on ‘dbhost2’ succeeded

CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘dbhost2’

CRS-2676: Start of ‘ora.cssdmonitor’ on ‘dbhost2’ succeeded

CRS-2672: Attempting to start ‘ora.cssd’ on ‘dbhost2’

CRS-2672: Attempting to start ‘ora.diskmon’ on ‘dbhost2’

CRS-2676: Start of ‘ora.diskmon’ on ‘dbhost2’ succeeded

CRS-2676: Start of ‘ora.cssd’ on ‘dbhost2’ succeeded

CRS-2672: Attempting to start ‘ora.ctssd’ on ‘dbhost2’

CRS-2676: Start of ‘ora.ctssd’ on ‘dbhost2’ succeeded

CRS-2672: Attempting to start ‘ora.crsd’ on ‘dbhost2’

CRS-2676: Start of ‘ora.crsd’ on ‘dbhost2’ succeeded

CRS-2672: Attempting to start ‘ora.evmd’ on ‘dbhost2’

CRS-2676: Start of ‘ora.evmd’ on ‘dbhost2’ succeeded

 

dbhost2 2013/02/19 12:48:37 /app/11.2.0/grid/cdata/dbhost2/backup_20130219_124837.olr

Preparing packages for installation…

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster … succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer…

 

Checking swap space: must be greater than 500 MB. Actual 8188 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /app/oraInventory

‘UpdateNodeList’ was successful.

 

 

 

If you are not using DNS to resolve host names and are relying only on /etc/hosts, you might see the following failed step at “Oracle Cluster Verification Utility”. This is a known issue and you can ignore it.

 

 

 

 

Click Skip and the status will change to Ignored. Click Next

 

 

Click Close to finish the installation.
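After closing the installer, you can optionally run the Cluster Verification Utility from the grid home as an additional sanity check. A hedged example; the SCAN-related checks may still report failures when only /etc/hosts is used, and those can be ignored for this lab:

[oracle@dbhost1 ~]$ /app/11.2.0/grid/bin/cluvfy stage -post crsinst -n dbhost1,dbhost2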

 

Verify that the cluster services are started properly on both nodes.

 

[root@dbhost2 ~]# /app/11.2.0/grid/bin/srvctl status nodeapps

VIP dbhost1-vip is enabled

VIP dbhost1-vip is running on node: dbhost1

VIP dbhost2-vip is enabled

VIP dbhost2-vip is running on node: dbhost2

Network is enabled

Network is running on node: dbhost1

Network is running on node: dbhost2

GSD is disabled

GSD is not running on node: dbhost1

GSD is not running on node: dbhost2

ONS is enabled

ONS daemon is running on node: dbhost1

ONS daemon is running on node: dbhost2

eONS is enabled

eONS daemon is running on node: dbhost1

eONS daemon is running on node: dbhost2

 

Next: Install Oracle Database software and create RAC database

 


Installing 11g RAC on Linux VM: Setup shared file system and other pre-requisites

Previous: Copy/clone this virtual machine to create second node and modify host details

Now we need to set up a shared file system for these nodes, since shared storage is a must for Oracle RAC.

Since we are not using any external storage, NAS, or SAN, we will host the shared file system on node 1 and share it with node 2 using NFS. This is not recommended for production, but since we are not using external storage for the VMs, it is the simplest way to achieve a shared file system.

Note: Please do not use VMware's shared folders option for this, since it is not cluster-aware and cannot guarantee that reads and writes from multiple hosts are handled properly. This can cause cluster panic, so use only NFS for this purpose.

Since we are not using shared storage for Virtual Machines, we are sharing disks via NFS from one node to another. Here is how the VMs can be represented.

[Diagram: 11gR2 RAC VM layout with NFS-shared storage (11gR2-VM-visio)]

Log in to Node 1 as the root user. Please note that these steps are to be done ONLY on Node 1, not on Node 2.

Let us first create the directories which will host the shared data.

[root@dbhost1 ~]# mkdir /shared_1

[root@dbhost1 ~]# mkdir /shared_2

[root@dbhost1 ~]# mkdir /shared_3

[root@dbhost1 ~]# chown oracle:dba /shared_1

[root@dbhost1 ~]# chown oracle:dba /shared_2

[root@dbhost1 ~]# chown oracle:dba /shared_3

[root@dbhost1 ~]# chmod g+w /shared_1

[root@dbhost1 ~]# chmod g+w /shared_2

[root@dbhost1 ~]# chmod g+w /shared_3

 

Now we need to enable these directories to be shared over NFS. Add the following entries to the /etc/exports file.

[root@dbhost1 ~]# more /etc/exports

/shared_1   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

/shared_2   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

/shared_3   *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
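If the NFS service is already running when you edit /etc/exports, you can re-export the directories without restarting the service; a hedged one-liner:

[root@dbhost1 ~]# exportfs -ra

You can confirm what is currently exported with exportfs -v.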

 

Check whether the NFS service is running. If not, we need to start it.

[root@dbhost1 ~]# service nfs status

rpc.mountd is stopped

nfsd is stopped

rpc.rquotad is stopped

 

[root@dbhost1 ~]# chkconfig nfs on

[root@dbhost1 ~]# service nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS quotas:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Stopping RPC idmapd:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]

 

The above steps have made the newly created shared directories available for NFS mounting.

The following steps need to be done on both Node 1 and Node 2.

 

Append the following entries to the /etc/fstab file.

[root@dbhost1 ~]# tail -3 /etc/fstab

nfshost:/shared_1       /u01    nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nfshost:/shared_2       /u02    nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nfshost:/shared_3       /u03    nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

 

Let us mount each of these shared directories on the mount points created earlier. This is not required after the next restart, since the system will mount them automatically based on the /etc/fstab entries.

 

[root@dbhost1 ~]# mount /u01

[root@dbhost1 ~]# mount /u02

[root@dbhost1 ~]# mount /u03

 

Confirm that these directories are mounted correctly.

[root@dbhost1 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

283G  4.6G  264G   2% /

/dev/sda1              99M   41M   53M  44% /boot

tmpfs                 749M     0  749M   0% /dev/shm

.host:/               301G  248G   53G  83% /mnt/hgfs

nfshost:/shared_1     283G  4.6G  264G   2% /u01

nfshost:/shared_2     283G  4.6G  264G   2% /u02

nfshost:/shared_3     283G  4.6G  264G   2% /u03

 

As mentioned, the next time you restart, these directories will be mounted automatically.

 

IMPORTANT NOTE: Whenever you restart both servers, make sure you let Node 1 start properly before starting Node 2. The shared file system is hosted on Node 1, so if Node 2 starts before Node 1 has enabled the NFS share during startup, the mount points will not come up on Node 2. This will especially be a problem once our clusterware and RAC database are installed.
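After both nodes are up, a quick hedged check on Node 2 confirms that the NFS mounts came back before the clusterware tries to use them:

[root@dbhost2 ~]# df -h /u01 /u02 /u03

If any of the mount points is missing, you can mount it manually (for example, mount /u01) once Node 1 is exporting the shares.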

 

 

Setting up user-equivalence on both nodes (Passwordless SSH setup)

Passwordless SSH connectivity is a must for RAC installation. It is required for the nodes to communicate with each other and to pass commands to each node during installation, runtime, and maintenance.

We will set up user equivalence for both the oracle and oradb users. Let us start with the grid owner user, oracle.

 

Perform the following on Node 1.

Let us first create the known_hosts entries so that SSH does not prompt to add the host next time. We will use all four combinations each time, i.e. dbhost1, dbhost1.paramlabs.com (fully qualified domain name), dbhost2, and dbhost2.paramlabs.com.

 

[root@dbhost1 ~]# su - oracle

[oracle@dbhost1 ~]$ ssh oracle@dbhost2

The authenticity of host ‘dbhost2 (192.168.112.102)’ can’t be established.

RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2,192.168.112.102’ (RSA) to the list of known hosts.

oracle@dbhost2’s password:

 

[oracle@dbhost1 ~]$ ssh oracle@dbhost1

The authenticity of host ‘dbhost1 (192.168.112.101)’ can’t be established.

RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1,192.168.112.101’ (RSA) to the list of known hosts.

oracle@dbhost1’s password:

 

[oracle@dbhost1 ~]$ ssh oracle@dbhost1.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

 

[oracle@dbhost1 ~]$ ssh oracle@dbhost2.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

 

Now let us generate the public key for dbhost1. Accept the defaults and press Enter whenever prompted.

 

[oracle@dbhost1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

da:dd:34:90:3a:cb:01:b8:42:4d:14:90:4e:7d:4e:f1 oracle@dbhost1.paramlabs.com

 

This will generate the following files in the <home directory>/.ssh directory.

 

[oracle@dbhost1 ~]$ ls -ltr /home/oracle/.ssh/

total 12

-rw-r--r-- 1 oracle dba  618 Feb 17 13:10 id_dsa.pub

-rw------- 1 oracle dba  668 Feb 17 13:10 id_dsa

-rw-r--r-- 1 oracle dba 1616 Feb 17 13:11 known_hosts

 

Now we need to copy this public key to the second host in order to authorize passwordless connections from this host as the oracle user. We will save the public key in a file named authorized_keys on node 2.

 

[oracle@dbhost1 ~]$ scp /home/oracle/.ssh/id_dsa.pub oracle@dbhost2:/home/oracle/.ssh/authorized_keys

oracle@dbhost2’s password:

id_dsa.pub                                                                                                 100%  618     0.6KB/s   00:00

 

We will also append dbhost1's public key to its own authorized_keys file so that it can ssh to itself without a password (2 of the 4 combinations).

 

[oracle@dbhost1 ~]$ cat /home/oracle/.ssh/id_dsa.pub >> /home/oracle/.ssh/authorized_keys

 

Now perform similar steps on Node 2 (dbhost2).

 

[oracle@dbhost2 ~]$ ssh oracle@dbhost1

The authenticity of host ‘dbhost1 (192.168.112.101)’ can’t be established.

RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1,192.168.112.101’ (RSA) to the list of known hosts.

oracle@dbhost1’s password:

[oracle@dbhost2 ~]$ ssh oracle@dbhost1.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

[oracle@dbhost2 ~]$ ssh oracle@dbhost2

Are you sure you want to continue connecting (yes/no)? yes

[oracle@dbhost2 ~]$ ssh oracle@dbhost2.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

 

Generate the public key on dbhost2.

 

[oracle@dbhost2 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

04:bb:86:9d:b4:e6:33:28:62:e4:fa:ff:02:ba:8c:ea oracle@dbhost2.paramlabs.com

 

[oracle@dbhost2 ~]$ ls -ltr /home/oracle/.ssh/

total 16

-rw-r--r-- 1 oracle dba 1616 Feb 17 13:10 known_hosts

-rw-r--r-- 1 oracle dba  618 Feb 17 13:13 authorized_keys

-rw-r--r-- 1 oracle dba  618 Feb 17 13:13 id_dsa.pub

-rw------- 1 oracle dba  672 Feb 17 13:13 id_dsa

 

[oracle@dbhost2 ~]$ scp /home/oracle/.ssh/id_dsa.pub oracle@dbhost1:/home/oracle/.ssh/authorized_keys

oracle@dbhost1’s password:

id_dsa.pub                                                                                                 100%  618     0.6KB/s   00:00

 

[oracle@dbhost2 ~]$ cat /home/oracle/.ssh/id_dsa.pub >> /home/oracle/.ssh/authorized_keys

 

Now let us test the passwordless connectivity between these two nodes using the oracle user.

On Node 1

[oracle@dbhost1 ~]$ ssh oracle@dbhost1

[oracle@dbhost1 ~]$ exit

 

As you can see, it did not prompt for a password. Check the same for all four combinations. Make sure to exit after each ssh test to return to the original shell.

 

[oracle@dbhost1 ~]$ ssh oracle@dbhost2

Last login: Sun Feb 17 13:14:23 2013 from dbhost1.paramlabs.com

[oracle@dbhost1 ~]$ exit

[oracle@dbhost1 ~]$ ssh oracle@dbhost1.paramlabs.com

Last login: Sun Feb 17 13:15:59 2013 from dbhost1.paramlabs.com

[oracle@dbhost1 ~]$ exit

[oracle@dbhost1 ~]$ ssh oracle@dbhost2.paramlabs.com

Last login: Sun Feb 17 13:16:03 2013 from dbhost1.paramlabs.com

[oracle@dbhost1 ~]$ exit

 

On Node2

[oracle@dbhost2 ~]$ ssh dbhost1

Last login: Sun Feb 17 13:16:12 2013 from dbhost1.paramlabs.com

[oracle@dbhost2 ~]$ exit

[oracle@dbhost2 ~]$ ssh dbhost2

Last login: Sun Feb 17 13:16:16 2013 from dbhost1.paramlabs.com

[oracle@dbhost2 ~]$ exit

[oracle@dbhost2 ~]$ ssh dbhost1.paramlabs.com

Last login: Sun Feb 17 13:17:12 2013 from dbhost2.paramlabs.com

[oracle@dbhost2 ~]$ exit

[oracle@dbhost2 ~]$ ssh dbhost2.paramlabs.com

Last login: Sun Feb 17 13:17:16 2013 from dbhost2.paramlabs.com

[oracle@dbhost2 ~]$ exit

 

Now we have to perform exactly the same steps for the database owner user, oradb. Here I am skipping the explicit creation of known_hosts entries, since the first ssh test will create them anyway.

 

[oradb@dbhost1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oradb/.ssh/id_dsa):

Created directory ‘/home/oradb/.ssh’.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oradb/.ssh/id_dsa.

Your public key has been saved in /home/oradb/.ssh/id_dsa.pub.

The key fingerprint is:

6e:fb:1e:55:cc:41:b9:a4:d3:7c:26:cf:f9:39:8c:a7 oradb@dbhost1.paramlabs.com

 

[oradb@dbhost1 ~]$ cd /home/oradb/.ssh/

[oradb@dbhost1 .ssh]$ scp id_dsa.pub oradb@dbhost2:/home/oradb/.ssh/authorized_keys

The authenticity of host ‘dbhost2 (192.168.112.102)’ can’t be established.

RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2,192.168.112.102’ (RSA) to the list of known hosts.

oradb@dbhost2’s password:

id_dsa.pub                                                                              100%  617     0.6KB/s   00:00

 

[oradb@dbhost1 .ssh]$ cat id_dsa.pub  >> authorized_keys

 

On Node 2

 

[oradb@dbhost2 ~]$ ssh-keygen -t dsa

[oradb@dbhost2 ~]$ cd /home/oradb/.ssh/

[oradb@dbhost2 .ssh]$  scp id_dsa.pub oradb@dbhost1:/home/oradb/.ssh/authorized_keys

Are you sure you want to continue connecting (yes/no)? yes

oradb@dbhost1’s password:

id_dsa.pub                                                                              100%  617     0.6KB/s   00:00

 

[oradb@dbhost2 .ssh]$ cat id_dsa.pub  >> authorized_keys

 

Now let us test the passwordless ssh for oradb user on both nodes.

 

On Node 1

[oradb@dbhost1 .ssh]$ ssh oradb@dbhost1

Are you sure you want to continue connecting (yes/no)? yes

[oradb@dbhost1 ~]$ exit

[oradb@dbhost1 .ssh]$ ssh oradb@dbhost2

[oradb@dbhost2 ~]$ exit

[oradb@dbhost1 .ssh]$ ssh oradb@dbhost1.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

[oradb@dbhost1 ~]$ exit

[oradb@dbhost1 .ssh]$ ssh oradb@dbhost2.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

[oradb@dbhost2 ~]$ exit

 

On Node 2

[oradb@dbhost2 .ssh]$ ssh oradb@dbhost1

Last login: Tue Feb 19 13:26:27 2013 from dbhost1.paramlabs.com

[oradb@dbhost1 ~]$ exit

[oradb@dbhost2 .ssh]$ ssh oradb@dbhost2

Are you sure you want to continue connecting (yes/no)? yes

[oradb@dbhost2 ~]$ exit

[oradb@dbhost2 .ssh]$ ssh oradb@dbhost1.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

[oradb@dbhost1 ~]$ exit

[oradb@dbhost2 .ssh]$ ssh oradb@dbhost2.paramlabs.com

Are you sure you want to continue connecting (yes/no)? yes

..

[oradb@dbhost2 ~]$ exit

 

Next: Install Oracle Clusterware

 

 
