Installing 12c RAC on Linux VM: Install Oracle Grid Infrastructure

Previous: Setup shared file system and other pre-requisites

The Grid Infrastructure (clusterware) and database installations need to be started on only one node; the installer propagates the files to the remote node automatically during the installation.

If you have the setup files on your host machine, you can share the setup folder with the VM using the VMware or Oracle VirtualBox shared folder option.

The following screen shows how to share a folder with the VM using VMware. This can be done even while the VM is running.

The folder you share using this option will be available by default under the /mnt/hgfs/ directory in Linux.

Now let us start the Oracle Clusterware installation.

Before proceeding, please make the following change if you are not using DNS (skip it if you already did this in the previous step).

[root@dbhost1 ~]# mv /etc/resolv.conf /etc/resolv.conf_bak

[root@dbhost2 ~]# mv /etc/resolv.conf /etc/resolv.conf_bak

Log in as the grid user (grid owner).

[grid@dbhost1 ~]$ cd /mnt/hgfs/Setup/grid/

Start the installation using the ./runInstaller script.

[grid@dbhost1 grid]$ ./runInstaller

 

Select “Skip software updates” and click Next

 

 

Select “Install and Configure Oracle Grid Infrastructure for a Cluster” and click Next

 

You can see an interesting new feature here called “Flex Cluster“. It allows a set of servers in the cluster to be assigned flexible roles, such as database (hub) nodes and application (leaf) nodes. We will discuss this in detail later.

 

Select “Configure a Standard cluster” and click Next

 

Select “Advanced installation” and click Next

 

Click Next

 

Enter the details as follows and click Next. You can change these values if you wish, but make sure to use the same values consistently wherever they are required on later screens.

Cluster Name: dbhost-cluster

SCAN Name: dbhost-scan.paramlabs.com

SCAN Port: 1521

Deselect GNS. Click Next

You will see only the first host here. We need to add the second node to the cluster. Click Add.

 

Enter the details of the second host and its virtual hostname (dbhost2-vip.paramlabs.com), then click OK.

 

Select both nodes and click “SSH connectivity“, then click Test. Since passwordless SSH was already set up earlier, the test should be successful.

Click Next

 

Oracle Flex ASM storage is a new concept in 12c. If you are planning to use it, you can select “ASM and Private” for the second interface, or assign a different interface to it later.

Since we will use only a shared file system, we select only “Private” here.

 

Select Private for eth1 and leave eth0 as Public (detected automatically). Click Next.

 

 

Oracle recommends that you create a Grid Infrastructure Management Repository.

This repository is an optional component, but if you do not select this feature during installation, then you lose access to Oracle Database Quality of Service management, Memory Guard, and Cluster Health Monitor. You cannot enable these features after installation except by reinstalling Oracle Grid Infrastructure.

We will discuss this feature in another post. Let us select “Yes” here. You can choose No if you are not planning to use the advanced cluster features mentioned above. Click Next.

 

Select “Use Shared File System” and click Next

 

Important Note: Normal redundancy increases the load on the VMs, because NFS must keep all the copies of these files in sync at all times, and this can cause problems during normal operation of Oracle RAC.

So please select EXTERNAL REDUNDANCY, which means you will mirror this storage using external methods. We are not actually using any external mirroring here, but for a non-production, learning setup we can accept that risk.

Enter the following value for the OCR storage location and click Next.

/u02/storage/ocr

The same note applies to the voting disk location, so select EXTERNAL REDUNDANCY here as well.

Enter the following value for the voting disk storage location and click Next.

/u03/storage/vdsk
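
Before clicking Next, it can be worth confirming on node 1 that the NFS mount points are in place and writable by the grid user, and that the parent directories for these paths exist (the installer's own checks should catch this too, and it may create the directories itself, so treat the mkdir as an optional precaution). A quick sketch using the paths from this guide:

[grid@dbhost1 ~]$ df -h /u02 /u03

[grid@dbhost1 ~]$ mkdir -p /u02/storage /u03/storage

[grid@dbhost1 ~]$ ls -ld /u02/storage /u03/storage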

Select “Do not use IPMI” and click Next

 

We have selected dba for all of the above groups; you can choose different groups if you wish. Click Next.

 

If you chose dba for all of the above groups, you might see the above message box. Click Yes.

 

Enter the following values and click Next.
Oracle Base: /app/grid

Software Location: /app/12.1.0/grid

Click Next

 

Click Next

 

In 12c the installer can run the various root user scripts itself if we provide the root password here. Select the checkbox and enter the root password. You can skip this as well, but in that case you will be prompted to run the scripts manually. Click Next.

 

 

On the prerequisite checks screen, click the zeroconf check and select “more details” from the bottom pane.

We can fix it as follows.

[root@dbhost1 ~]# cp -pr /etc/sysconfig/network /etc/sysconfig/network.bak

Update the file as follows.

[root@dbhost1 ~]# more /etc/sysconfig/network

# Added NOZEROCONF as pre-requisite for 12c

NOZEROCONF=yes

NETWORKING=yes

NETWORKING_IPV6=yes

HOSTNAME=dbhost1.paramlabs.com

 

[root@dbhost2 ~]# cp -pr /etc/sysconfig/network /etc/sysconfig/network.bak

Update the file as follows.

[root@dbhost2 ~]# more /etc/sysconfig/network

# Added NOZEROCONF as pre-requisite for 12c

NOZEROCONF=yes

NETWORKING=yes

NETWORKING_IPV6=yes

HOSTNAME=dbhost2.paramlabs.com
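
If you prefer to make this change from the command line instead of editing the file by hand, something like the following should work on each node (a sketch; keep the backup created above, and note that the one-liner only appends NOZEROCONF if it is not already present):

[root@dbhost1 ~]# grep -q '^NOZEROCONF' /etc/sysconfig/network || echo 'NOZEROCONF=yes' >> /etc/sysconfig/network

[root@dbhost2 ~]# grep -q '^NOZEROCONF' /etc/sysconfig/network || echo 'NOZEROCONF=yes' >> /etc/sysconfig/network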

 

You can see that the installer can fix some of these prerequisite failures on its own. Click “Fix & Check Again” to see what it can fix automatically.

You will see the above screen if you provided the root password earlier. Click OK to let it run the fixup scripts and start the validation again.

 

 

You can see the result of the fixup script on the second tab.

 

We can ignore this warning since we know 3 GB of memory is good enough for this setup.

Click “Ignore all” and click Next

 

Click Yes

 

Save the response file if required. Click Next.

 

 

 

At this point the installer will prompt that it will run the root scripts automatically using the password provided earlier. Click Yes.

 

You can view the details of the current activity by clicking the “Details” button.

 

 

At this point the installer will error out with the following message.

 

No need to panic. This is expected, since we did not use DNS and the installer tries to resolve the SCAN name using DNS. You can confirm this in the log file.

===================

INFO: PRVG-1101 : SCAN name “dbhost-scan.paramlabs.com” failed to resolve

INFO: ERROR:

INFO: PRVF-4657 : Name resolution setup check for “dbhost-scan.paramlabs.com” (IP address: 192.168.1.125) failed

INFO: ERROR:

INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name “dbhost-scan.paramlabs.com”

INFO: Checking SCAN IP addresses…

INFO: Check of SCAN IP addresses passed

INFO: Verification of SCAN VIP and Listener setup failed

==================
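
If you want to double-check that the SCAN name does resolve locally through /etc/hosts (just not through DNS, which is what this verification expects), a quick sketch; it should return the 192.168.1.125 entry we added to /etc/hosts earlier:

[grid@dbhost1 ~]$ getent hosts dbhost-scan.paramlabs.com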

Click Skip

 

Click Next

 

Click Yes

 

Click Close to finish the installation

 

Verify that the cluster services are started properly on both nodes.

 

[grid@dbhost1 logs]$ /app/12.1.0/grid/bin/srvctl status nodeapps

VIP dbhost1-vip.paramlabs.com is enabled

VIP dbhost1-vip.paramlabs.com is running on node: dbhost1

VIP dbhost2-vip.paramlabs.com is enabled

VIP dbhost2-vip.paramlabs.com is running on node: dbhost2

Network is enabled

Network is running on node: dbhost1

Network is running on node: dbhost2

ONS is enabled

ONS daemon is running on node: dbhost1

ONS daemon is running on node: dbhost2
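
You can also check the overall cluster stack and registered resources with crsctl; a couple of quick commands (output omitted here):

[grid@dbhost1 ~]$ /app/12.1.0/grid/bin/crsctl check cluster -all

[grid@dbhost1 ~]$ /app/12.1.0/grid/bin/crsctl stat res -t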

This concludes the 12c Grid Infrastructure installation. The next step is to install the Oracle Database software and create the RAC database.

Next: Install Oracle Database software and create RAC database

Oracle 12c (12.1) RAC (Real Application Clusters) installation on Linux Virtual Machines – Step by step guide

1. Create Virtual Machine and install 64 bit Linux
2. Add additional virtual Ethernet card and perform prerequisites in Linux
3. Copy/clone this virtual machine to create second node and modify host details
4. Setup shared file system and other pre-requisites
5. Install Oracle Grid Infrastructure
6. Install Oracle Database software and create RAC database


Installing 12c RAC on Linux VM: Add additional virtual Ethernet card and perform prerequisites in Linux

Since we have installed the basic Linux VM as per our previous guide, we now need to add an additional Ethernet card for the Oracle Clusterware private interconnect IP.

Make sure that the OS is shut down cleanly and the VM shows a Powered Off status.

In VMware, select the VM, right-click, and select Settings. The same option is available in Oracle VirtualBox as well.

You will see the above screen. Make sure that the memory is set to 3 GB (or at least 2.5 GB). Click “Add” to add new hardware.

 Select “Network Adapter” from the list and click Next

Select “Host-only” network and click Finish

The settings page should now look like the above. Click OK to close the window.

Start up the Linux operating system in VM and login with root account.

Click System -> Administration -> Network to open the network configuration screen.

Click Edit to confirm the IP address.

Optionally, you can disable MAC address binding, since you will clone this VM to create the second node.

You will see one interface, eth0, here. Now we will add another interface in Linux for the new virtual network card. Click New.

Specify another IP address in a different range. Our eth0 IP is 192.168.1.121, so we have selected 192.168.2.121 (a different subnet) for eth1.

Do not specify any default gateway address here; on physical servers these interfaces are usually connected directly to each other, and we do the same in the VMs.

Make sure that the hostname appears correctly under the DNS tab. You can change the hostname here if required.

You will see that the new interface still shows as Inactive. Select the interface and click “Activate”.

Now both interfaces will show as “Active”. Make sure to save the configuration by clicking “File -> Save”.

Once you have saved the configuration, restart the network service to confirm that the new configuration remains active.

[root@dbhost1 ~]# service network restart
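
After the restart you can confirm that both interfaces came up with the expected addresses; a quick check (interface names and IPs as used in this guide):

[root@dbhost1 ~]# ifconfig eth0 | grep 'inet addr'

[root@dbhost1 ~]# ifconfig eth1 | grep 'inet addr'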

Comment out the entries in /etc/resolv.conf to disable DNS lookups if you are not using DNS to resolve host names. This greatly improves the general performance of the VM, especially NFS.

Note: In 12c we need to rename the /etc/resolv.conf file if you are not using DNS, since the prerequisite checks will fail otherwise.

[root@dbhost1 ~]# mv /etc/resolv.conf /etc/resolv.conf_bak

If you are modifying an existing Linux VM and changing only the hostname, make sure that the new hostname is reflected in the following file; for a fresh installation it will already be correct.

Create the following entries in the /etc/hosts file.

[root@dbhost1 ~]# more /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1 localhost.localdomain localhost

#::1 localhost6.localdomain6 localhost6

192.168.1.121 dbhost1.paramlabs.com dbhost1

192.168.1.122 dbhost2.paramlabs.com dbhost2

192.168.1.123 dbhost1-vip.paramlabs.com dbhost1-vip

192.168.1.124 dbhost2-vip.paramlabs.com dbhost2-vip

192.168.2.121 dbhost1-priv.paramlabs.com dbhost1-priv

192.168.2.122 dbhost2-priv.paramlabs.com dbhost2-priv

192.168.1.125 dbhost-scan.paramlabs.com dbhost-scan

192.168.1.121 nfshost.paramlabs.com nfshost

Let me explain why we have made these entries.

dbhost1 and dbhost2 are the primary IP addresses of the two virtual nodes. Though we are yet to create node 2, let us create these entries now so that when we clone/copy this VM, they will already be present on the second node.

dbhost1-vip and dbhost2-vip will be used as the VIPs for the RAC nodes. These IPs are assigned to interface aliases on the respective host where each VIP is active; this is handled transparently by Oracle RAC.

dbhost1-priv and dbhost2-priv are the private interfaces for node 1 and node 2 and will serve as the interconnect IP addresses. On physical servers these interfaces are connected through a cross-over cable or a dedicated switch (recommended); in the VMs this is taken care of automatically.
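
To quickly confirm that all of these names resolve through /etc/hosts (the VIP and SCAN addresses will not respond to ping yet, since Clusterware assigns them only later), you can check name resolution alone; a minimal sketch:

[root@dbhost1 ~]# for h in dbhost1 dbhost2 dbhost1-vip dbhost2-vip dbhost1-priv dbhost2-priv dbhost-scan nfshost; do getent hosts $h; done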

Make sure that you are able to ping the primary IP of node 1.

[root@dbhost1 ~]# ping dbhost1 -c1

PING dbhost1.paramlabs.com (192.168.1.121) 56(84) bytes of data.

64 bytes from dbhost1.paramlabs.com (192.168.1.121): icmp_seq=1 ttl=64 time=0.076 ms

 

--- dbhost1.paramlabs.com ping statistics ---

1 packets transmitted, 1 received, 0% packet loss, time 0ms

rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms

If you are not using NTP for time synchronization, make sure that the ntpd service is stopped.

[root@dbhost1 ~]# service ntpd status

ntpd is stopped

Run the following on both nodes to rename the ntp.conf file.

[root@dbhost1 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig
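
You may also want to make sure ntpd does not start again at the next boot; a one-line sketch for each node:

[root@dbhost1 ~]# chkconfig ntpd off

[root@dbhost2 ~]# chkconfig ntpd off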

Let us now create required users.

First create owner user for Oracle clusterware/grid.

[root@dbhost1 ~]# useradd -g dba -G oinstall grid

[root@dbhost1 ~]# passwd grid

Changing password for user grid.

New UNIX password:

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

 

Since we already have a user named oracle (created as part of our Linux installation steps), we will make sure that its groups are set correctly. This will be the database owner user.

[root@dbhost1 ~]# usermod -g dba -G oinstall oracle

[root@dbhost1 ~]# passwd oracle

Changing password for user oracle.

New UNIX password:

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

Since the prerequisite packages were already installed as part of our Linux installation, we will skip that for now. The clusterware installation might flag a few more prerequisites, but it provides a fixup script to change those values automatically, so we can proceed.

Let us create the required directories now, so that when we clone this VM, the second node will already have them ready.

[root@dbhost1 ~]# mkdir -p /app/oracle

[root@dbhost1 ~]# mkdir -p /app/12.1.0/grid

[root@dbhost1 ~]# chown grid:dba /app

[root@dbhost1 ~]# chown grid:dba /app/oracle

[root@dbhost1 ~]# chown grid:dba /app/12.1.0

[root@dbhost1 ~]# chown grid:dba /app/12.1.0/grid

[root@dbhost1 ~]# chmod -R g+w /app

Now let us create the mount points on which the shared file system will be mounted on both nodes. For now just create these directories; the shared file system itself will be set up in a later step.

[root@dbhost1 ~]# mkdir /u01

[root@dbhost1 ~]# mkdir /u02

[root@dbhost1 ~]# mkdir /u03

[root@dbhost1 ~]# chown grid:dba /u01

[root@dbhost1 ~]# chown grid:dba /u02

[root@dbhost1 ~]# chown grid:dba /u03

[root@dbhost1 ~]# chmod g+w /u01

[root@dbhost1 ~]# chmod g+w /u02

[root@dbhost1 ~]# chmod g+w /u03

This concludes the prerequisite steps to prepare the basic Linux VM for RAC. In the next step we will clone this VM to create node 2 of the RAC cluster.

Next: Copy/clone this virtual machine to create second node and modify host details


Installing 12c RAC on Linux VM: Setup shared file system and other pre-requisites

Previous: Copy/clone this virtual machine to create second node and modify host details

Now we need to set up a shared file system for these nodes, since shared storage is a must for Oracle RAC.

Since we are not using any external storage (NAS or SAN), we will host the shared file system on node 1 and share it with node 2 using NFS. This is not recommended for production, but without external storage for the VMs it is the simplest way to achieve a shared file system.

Note: Please do not use VMware's shared folders option, since it is not cluster-aware and cannot guarantee that reads/writes from multiple hosts are handled properly. This can cause a cluster panic, so use NFS only for this purpose. We are also not using the VirtualBox shared disk option, since we want to keep the setup generic.

Since we are not using shared storage for the virtual machines, we share disks over NFS from one node to the other. Here is how the VMs can be represented.

(Diagram: node 1 hosts the shared directories and exports them over NFS; both nodes mount them as /u01, /u02 and /u03.)

Log in to node 1 as the root user. Please note that these steps are to be done ONLY on node 1, not on node 2.

Let us first create the directories which will host the shared data.

[root@dbhost1 ~]# mkdir /shared_1

[root@dbhost1 ~]# mkdir /shared_2

[root@dbhost1 ~]# mkdir /shared_3

[root@dbhost1 ~]# chown grid:dba /shared_1

[root@dbhost1 ~]# chown grid:dba /shared_2

[root@dbhost1 ~]# chown grid:dba /shared_3

[root@dbhost1 ~]# chmod g+w /shared_1

[root@dbhost1 ~]# chmod g+w /shared_2

[root@dbhost1 ~]# chmod g+w /shared_3

Now we need to enable these directories to be shared over NFS. Enter the following details in the /etc/exports file.

You will need the uid and gid of the grid user. You can find them by executing the “id” command while logged in as the grid user.

[grid@dbhost1 ~]$ id
uid=54322(grid) gid=54322(dba) groups=54322(dba),54323(oinstall)

[root@dbhost1 ~]# more /etc/exports

/shared_1 *(rw,sync,no_root_squash,insecure,anonuid=54322,anongid=54322)

/shared_2 *(rw,sync,no_root_squash,insecure,anonuid=54322,anongid=54322)

/shared_3 *(rw,sync,no_root_squash,insecure,anonuid=54322,anongid=54322)

Check whether the NFS service is running. If it is not, we need to start it and enable it at boot.

[root@dbhost1 ~]# service nfs status

rpc.mountd is stopped

nfsd is stopped

rpc.rquotad is stopped

[root@dbhost1 ~]# chkconfig nfs on

[root@dbhost1 ~]# service nfs start

Starting NFS services:                                     [  OK  ]

Starting NFS quotas:                                       [  OK  ]

Starting NFS daemon:                                       [  OK  ]

Starting NFS mountd:                                       [  OK  ]

Stopping RPC idmapd:                                       [  OK  ]

Starting RPC idmapd:                                       [  OK  ]

The above steps have made the newly created shared directories available for NFS mounting.
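
To confirm that the three directories are actually exported, you can list the export table on node 1 and, once node 2 is up, query it remotely; a quick sketch:

[root@dbhost1 ~]# exportfs -v

[root@dbhost2 ~]# showmount -e nfshost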

The following steps need to be done on both node 1 and node 2.

Append the following entries to the /etc/fstab file.

[root@dbhost1 ~]# tail -3 /etc/fstab

nfshost:/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nfshost:/shared_2 /u02 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nfshost:/shared_3 /u03 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

 

[root@dbhost2 ~]# tail -3 /etc/fstab

nfshost:/shared_1 /u01 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

nfshost:/shared_2 /u02 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

nfshost:/shared_3 /u03 nfs rw,bg,hard,nolock,noac,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0

 

Let us mount each of these shared directories on the mount points created earlier. From the next restart onwards this is not required, since the system will mount them automatically based on the /etc/fstab entries.

[root@dbhost1 ~]# mount /u01

[root@dbhost1 ~]# mount /u02

[root@dbhost1 ~]# mount /u03

Same on node 2
[root@dbhost2 ~]# mount /u01
[root@dbhost2 ~]# mount /u02
[root@dbhost2 ~]# mount /u03

Confirm that these directories are mounted correctly.

[root@dbhost1 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

283G 4.6G 264G 2% /

/dev/sda1 99M 41M 53M 44% /boot

tmpfs 1002M 0 1002M 0% /dev/shm

nfshost:/shared_1 283G 4.6G 264G 2% /u01

nfshost:/shared_2 283G 4.6G 264G 2% /u02

nfshost:/shared_3 283G 4.6G 264G 2% /u03

Same on node 2
[root@dbhost2 ~]# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/mapper/VolGroup00-LogVol00

283G 4.6G 264G 2% /

/dev/sda1 99M 41M 53M 44% /boot

tmpfs 1002M 0 1002M 0% /dev/shm

nfshost:/shared_1 283G 4.6G 264G 2% /u01

nfshost:/shared_2 283G 4.6G 264G 2% /u02

nfshost:/shared_3 283G 4.6G 264G 2% /u03

As mentioned, from the next restart onwards these directories will be mounted automatically.

IMPORTANT NOTE: Whenever you restart both servers, make sure that node 1 has started up properly before starting node 2. Because the shared file system is hosted on node 1, if node 2 starts before node 1 has enabled the NFS shares during startup, the mount points will not come up on node 2. This will especially be a problem once the clusterware and RAC database are installed.

Setting up user-equivalence on both nodes (Passwordless SSH setup)

Passwordless SSH connectivity is a must for the RAC installation. It is required for the nodes to communicate with each other and to pass commands to each node during installation, runtime, and maintenance.

We will set up user equivalence for both the oracle and grid users. Let us start with the Grid Infrastructure owner, the grid user.

Perform the following on node 1.

We will use all 4 host name combinations every time, i.e. dbhost1, dbhost1.paramlabs.com (the fully qualified domain name), dbhost2, and dbhost2.paramlabs.com.

[grid@dbhost1 ~]$ ssh grid@dbhost2

The authenticity of host ‘dbhost2 (192.168.1.122)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2,192.168.1.122’ (RSA) to the list of known hosts.

grid@dbhost2’s password:

[grid@dbhost2 ~]$ exit

logout

Connection to dbhost2 closed.

 

[grid@dbhost1 ~]$ ssh grid@dbhost1

The authenticity of host ‘dbhost1 (192.168.1.121)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1,192.168.1.121’ (RSA) to the list of known hosts.

grid@dbhost1’s password:

[grid@dbhost1 ~]$ exit

logout

Connection to dbhost1 closed.

 

[grid@dbhost1 ~]$ ssh grid@dbhost1.paramlabs.com

The authenticity of host ‘dbhost1.paramlabs.com (192.168.1.121)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1.paramlabs.com’ (RSA) to the list of known hosts.

grid@dbhost1.paramlabs.com’s password:

 

[grid@dbhost1 ~]$ ssh grid@dbhost2.paramlabs.com

The authenticity of host ‘dbhost2.paramlabs.com (192.168.1.122)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2.paramlabs.com’ (RSA) to the list of known hosts.

grid@dbhost2.paramlabs.com’s password:

Last login: Mon Jul 8 13:24:19 2013 from dbhost1.paramlabs.com

 

[grid@dbhost1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_dsa.

Your public key has been saved in /home/grid/.ssh/id_dsa.pub.

The key fingerprint is:

xxxxxx grid@dbhost1.paramlabs.com

 

[grid@dbhost1 ~]$ ls -ltr /home/grid/.ssh/

total 12

-rw-r–r– 1 grid dba 1612 Jul 8 13:25 known_hosts

-rw-r–r– 1 grid dba 616 Jul 8 13:26 id_dsa.pub

-rw——- 1 grid dba 668 Jul 8 13:26 id_dsa

 

[grid@dbhost2 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_dsa):

Created directory ‘/home/grid/.ssh’.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_dsa.

Your public key has been saved in /home/grid/.ssh/id_dsa.pub.

The key fingerprint is:

xxxx grid@dbhost2.paramlabs.com

 

[grid@dbhost2 ~]$ ls -ltr /home/grid/.ssh

total 8

-rw-r–r– 1 grid dba 616 Jul 8 13:28 id_dsa.pub

-rw——- 1 grid dba 668 Jul 8 13:28 id_dsa

 

[grid@dbhost1 ~]$ scp /home/grid/.ssh/id_dsa.pub grid@dbhost2:/home/grid/.ssh/authorized_keys

grid@dbhost2’s password:

id_dsa.pub 100% 616 0.6KB/s 00:00

 

[grid@dbhost2 ~]$ scp /home/grid/.ssh/id_dsa.pub grid@dbhost1:/home/grid/.ssh/authorized_keys

The authenticity of host ‘dbhost1 (192.168.1.121)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1,192.168.1.121’ (RSA) to the list of known hosts.

grid@dbhost1’s password:

id_dsa.pub 100% 616 0.6KB/s 00:00

 

[grid@dbhost1 ~]$ cat /home/grid/.ssh/id_dsa.pub >> /home/grid/.ssh/authorized_keys

[grid@dbhost2 ~]$ cat /home/grid/.ssh/id_dsa.pub >> /home/grid/.ssh/authorized_keys

 

Now let us test the passwordless connectivity between these two nodes using the grid user. If it does not prompt for a password, the test is successful.
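
If you prefer, the four combinations tested one by one below can also be checked in a single loop; each command should print the date without prompting for a password (a sketch, to be run from each node in turn):

[grid@dbhost1 ~]$ for h in dbhost1 dbhost2 dbhost1.paramlabs.com dbhost2.paramlabs.com; do ssh grid@$h date; done

[grid@dbhost2 ~]$ for h in dbhost1 dbhost2 dbhost1.paramlabs.com dbhost2.paramlabs.com; do ssh grid@$h date; done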

 

[grid@dbhost1 ~]$ ssh grid@dbhost1

Last login: Mon Jul 8 13:30:02 2013 from dbhost2.paramlabs.com

[grid@dbhost1 ~]$ exit

logout

Connection to dbhost1 closed.

 

[grid@dbhost1 ~]$ ssh grid@dbhost2

Last login: Mon Jul 8 13:30:10 2013 from dbhost2.paramlabs.com

[grid@dbhost2 ~]$ exit

logout

Connection to dbhost2 closed.

 

[grid@dbhost1 ~]$ ssh grid@dbhost1.paramlabs.com

Last login: Mon Jul 8 13:31:38 2013 from dbhost1.paramlabs.com

[grid@dbhost1 ~]$ exit

logout

Connection to dbhost1.paramlabs.com closed.

 

[grid@dbhost1 ~]$ ssh grid@dbhost2.paramlabs.com

Last login: Mon Jul 8 13:31:41 2013 from dbhost1.paramlabs.com

[grid@dbhost2 ~]$ exit

logout

Connection to dbhost2.paramlabs.com closed.

 

Let’s do the same on dbhost2

 

[grid@dbhost2 ~]$ ssh grid@dbhost1

Last login: Mon Jul 8 13:31:46 2013 from dbhost1.paramlabs.com

[grid@dbhost1 ~]$ exit

logout

Connection to dbhost1 closed.

 

[grid@dbhost2 ~]$ ssh grid@dbhost2

Last login: Mon Jul 8 13:31:50 2013 from dbhost1.paramlabs.com

[grid@dbhost2 ~]$ exit

logout

Connection to dbhost2 closed.

 

[grid@dbhost2 ~]$ ssh grid@dbhost1.paramlabs.com

The authenticity of host ‘dbhost1.paramlabs.com (192.168.1.121)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1.paramlabs.com’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 13:32:09 2013 from dbhost2.paramlabs.com

[grid@dbhost1 ~]$ exit

logout

Connection to dbhost1.paramlabs.com closed.

 

[grid@dbhost2 ~]$ ssh grid@dbhost2.paramlabs.com

The authenticity of host ‘dbhost2.paramlabs.com (192.168.1.122)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2.paramlabs.com’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 13:32:11 2013 from dbhost2.paramlabs.com

[grid@dbhost2 ~]$ exit

logout

Connection to dbhost2.paramlabs.com closed.

 

This concludes the passwordless SSH setup for the grid user. Let us do the same exercise for the oracle user.

 

Let us generate the public key on dbhost1. Accept the defaults and press Enter whenever prompted.

 

[oracle@dbhost1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

xxxxx oracle@dbhost1.paramlabs.com

 

Now let us generate the public key on dbhost2. Accept the defaults and press Enter whenever prompted.

 

[oracle@dbhost2 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Created directory ‘/home/oracle/.ssh’.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

xxxxx oracle@dbhost2.paramlabs.com

 

[oracle@dbhost1 ~]$ ls -ltr /home/oracle/.ssh/

total 8

-rw-r–r– 1 oracle oinstall 618 Jul 8 19:55 id_dsa.pub

-rw——- 1 oracle oinstall 672 Jul 8 19:55 id_dsa

 

[oracle@dbhost2 ~]$ ls -ltr /home/oracle/.ssh/

total 8

-rw-r–r– 1 oracle oinstall 618 Jul 8 19:55 id_dsa.pub

-rw——- 1 oracle oinstall 668 Jul 8 19:55 id_dsa

 

Now we need to copy this public key to the second host so that the oracle user can connect from this host to the second host without a password. We will save this public key in a file named authorized_keys on node 2.
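
As an alternative to copying the file manually with scp as shown below, most Linux distributions ship an ssh-copy-id utility that appends the key to the remote authorized_keys file for you (a sketch, assuming the tool is available on these hosts):

[oracle@dbhost1 ~]$ ssh-copy-id -i ~/.ssh/id_dsa.pub oracle@dbhost2

[oracle@dbhost2 ~]$ ssh-copy-id -i ~/.ssh/id_dsa.pub oracle@dbhost1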

 

[oracle@dbhost1~]$ cd /home/oracle/.ssh/

[oracle@dbhost1 .ssh]$ scp id_dsa.pub oracle@dbhost2:/home/oracle/.ssh/authorized_keys

The authenticity of host ‘dbhost2 (192.168.1.122)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2,192.168.1.122’ (RSA) to the list of known hosts.

oracle@dbhost2’s password:

id_dsa.pub

 

[oracle@dbhost2 ~]$ cd /home/oracle/.ssh/

[oracle@dbhost2 ~]$ scp id_dsa.pub oracle@dbhost1:/home/oracle/.ssh/authorized_keys

The authenticity of host ‘dbhost1 (192.168.1.121)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1,192.168.1.121’ (RSA) to the list of known hosts.

oracle@dbhost1’s password:

id_dsa.pub 100% 618 0.6KB/s 00:00

 

We will also append each host's public key to its own authorized_keys file so that it can ssh to itself without a password (covering 2 of the 4 combinations).

 

[oracle@dbhost1 .ssh]$ cat id_dsa.pub >> authorized_keys

[oracle@dbhost2 .ssh]$ cat id_dsa.pub >> authorized_keys

 

Now let us test the passwordless connectivity between these two nodes using the oracle user.

 

[oracle@dbhost1 .ssh]$ ssh oracle@dbhost1

The authenticity of host ‘dbhost1 (192.168.1.121)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1,192.168.1.121’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 19:54:43 2013 from 192.168.1.181

[oracle@dbhost1 ~]$ exit

logout

Connection to dbhost1 closed.

 

[oracle@dbhost1 .ssh]$ ssh oracle@dbhost2

Last login: Mon Jul 8 19:54:49 2013 from 192.168.1.181

[oracle@dbhost2 ~]$ exit

logout

Connection to dbhost2 closed.

 

[oracle@dbhost1 .ssh]$ ssh oracle@dbhost1.paramlabs.com

The authenticity of host ‘dbhost1.paramlabs.com (192.168.1.121)’ can’t be established.

RSA key fingerprint is a

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1.paramlabs.com’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 19:58:48 2013 from dbhost1.paramlabs.com

[oracle@dbhost1 ~]$ exit

logout

Connection to dbhost1.paramlabs.com closed.

 

[oracle@dbhost1 .ssh]$ ssh oracle@dbhost2.paramlabs.com

The authenticity of host ‘dbhost2.paramlabs.com (192.168.1.122)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2.paramlabs.com’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 19:58:51 2013 from dbhost1.paramlabs.com

[oracle@dbhost2 ~]$ exit

logout

Connection to dbhost2.paramlabs.com closed.

 

Now run the same tests from node 2 (dbhost2).

 

[oracle@dbhost2 .ssh]$ ssh oracle@dbhost1

Last login: Mon Jul 8 19:58:57 2013 from dbhost1.paramlabs.com

[oracle@dbhost1 ~]$ exit

logout

Connection to dbhost1 closed.

 

[oracle@dbhost2 .ssh]$ ssh oracle@dbhost2

The authenticity of host ‘dbhost2 (192.168.1.122)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2,192.168.1.122’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 19:59:02 2013 from dbhost1.paramlabs.com

[oracle@dbhost2 ~]$ exit

logout

Connection to dbhost2 closed.

 

[oracle@dbhost2 .ssh]$ ssh oracle@dbhost1.paramlabs.com

The authenticity of host ‘dbhost1.paramlabs.com (192.168.1.121)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost1.paramlabs.com’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 19:59:17 2013 from dbhost2.paramlabs.com

[oracle@dbhost1 ~]$ exit

logout

Connection to dbhost1.paramlabs.com closed.

 

[oracle@dbhost2 .ssh]$ ssh oracle@dbhost2.paramlabs.com

The authenticity of host ‘dbhost2.paramlabs.com (192.168.1.122)’ can’t be established.

RSA key fingerprint is

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added ‘dbhost2.paramlabs.com’ (RSA) to the list of known hosts.

Last login: Mon Jul 8 19:59:20 2013 from dbhost2.paramlabs.com

[oracle@dbhost2 ~]$ exit

logout

Connection to dbhost2.paramlabs.com closed.

Next: Install Oracle Grid Infrastructure
