Installing 11g RAC on Linux VM: Setup shared file system and other pre-requisites
Previous: Copy/clone this virtual machine to create second node and modify host details
Now we need to set up a shared file system for these nodes, since shared storage is a must for Oracle RAC.
Since we are not using any external storage, NAS, or SAN, we will host the shared file system on Node 1 and share it with Node 2 over NFS. This is not recommended for production, but since our VMs have no external storage, it is the simplest way to get a shared file system.
Note: Please do not use VMware's shared folders feature, since it is not cluster-aware and cannot guarantee that reads/writes from multiple hosts are handled correctly. This can cause a cluster panic, so use NFS only for this setup.
To recap: since the virtual machines have no shared storage, we are sharing disks via NFS from one node to the other. Here is how the VMs can be represented.
Log in to Node 1 as the root user. Please note that these steps are to be done ONLY on Node 1, not on Node 2.
Let us first create the directories which will host the shared data.
[root@dbhost1 ~]# mkdir /shared_1
[root@dbhost1 ~]# mkdir /shared_2
[root@dbhost1 ~]# mkdir /shared_3
[root@dbhost1 ~]# chown oracle:dba /shared_1
[root@dbhost1 ~]# chown oracle:dba /shared_2
[root@dbhost1 ~]# chown oracle:dba /shared_3
[root@dbhost1 ~]# chmod g+w /shared_1
[root@dbhost1 ~]# chmod g+w /shared_2
[root@dbhost1 ~]# chmod g+w /shared_3
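You can quickly confirm the ownership and permissions with a listing; each directory should show oracle:dba ownership and group write (drwxrwxr-x):
[root@dbhost1 ~]# ls -ld /shared_*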
Now we need to enable these directories to be shared over NFS. Add the following entries to the /etc/exports file:
[root@dbhost1 ~]# more /etc/exports
/shared_1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/shared_3 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
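As a side note, if the NFS service is already running when you edit /etc/exports, the new entries can be published without a restart using the standard exportfs utility:
[root@dbhost1 ~]# exportfs -a    (re-export everything listed in /etc/exports)
[root@dbhost1 ~]# exportfs -v    (list the active exports and their options)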
Check whether the NFS service is running. If not, we need to enable it at boot and start it.
[root@dbhost1 ~]# service nfs status
rpc.mountd is stopped
nfsd is stopped
rpc.rquotad is stopped
[root@dbhost1 ~]# chkconfig nfs on
[root@dbhost1 ~]# service nfs start
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
Stopping RPC idmapd: [ OK ]
Starting RPC idmapd: [ OK ]
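As an optional sanity check, you can ask the server for its export list and confirm that all three shared directories appear; the output should look similar to this:
[root@dbhost1 ~]# showmount -e localhost
Export list for localhost:
/shared_3 *
/shared_2 *
/shared_1 *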
The above steps have made the newly created directories available for NFS mounting.
The following steps need to be performed on both Node 1 and Node 2.
Append the following entries to the /etc/fstab file (see the note on nfshost just after the entries):
[root@dbhost1 ~]# tail -3 /etc/fstab
nfshost:/shared_1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nfshost:/shared_2 /u02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nfshost:/shared_3 /u03 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
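For reference, nfshost is not a real host name; it must resolve to Node 1's IP address on both nodes, since Node 1 is acting as the NFS server. One way to define it (assuming the IPs used in this guide) is a dedicated /etc/hosts line:
192.168.112.101    nfshost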
Let us mount each of these shared directories onto the mount points created earlier. This manual step is only needed this first time; from the next restart onwards, the system will mount them automatically based on the /etc/fstab entries.
[root@dbhost1 ~]# mount /u01
[root@dbhost1 ~]# mount /u02
[root@dbhost1 ~]# mount /u03
Confirm that these directories are mounted correctly.
[root@dbhost1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
283G 4.6G 264G 2% /
/dev/sda1 99M 41M 53M 44% /boot
tmpfs 749M 0 749M 0% /dev/shm
.host:/ 301G 248G 53G 83% /mnt/hgfs
nfshost:/shared_1 283G 4.6G 264G 2% /u01
nfshost:/shared_2 283G 4.6G 264G 2% /u02
nfshost:/shared_3 283G 4.6G 264G 2% /u03
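To double-check that the intended mount options are actually in effect (actimeo=0 in particular matters for RAC), you can inspect the live mount table; each line should repeat the options from /etc/fstab:
[root@dbhost1 ~]# grep shared /proc/mounts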
As mentioned, these directories will be mounted automatically on every subsequent restart.
IMPORTANT NOTE: Whenever you restart both servers, make sure that Node 1 starts up completely before you start Node 2. The shared file system is hosted on Node 1, so if Node 2 boots before Node 1 has enabled the NFS share, the mount points will not come up on Node 2. This becomes especially problematic once our clusterware and RAC database are installed.
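If Node 2 ever does come up first, a reboot is not strictly necessary: the bg mount option keeps retrying failed mounts in the background, and once Node 1 is fully up you can also retry the pending NFS mounts manually from Node 2:
[root@dbhost2 ~]# mount -a -t nfs    (retry all NFS entries from /etc/fstab)
[root@dbhost2 ~]# df -h | grep u0    (confirm /u01, /u02 and /u03 are back)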
Setting up user-equivalence on both nodes (Passwordless SSH setup)
Passwordless ssh connectivity is a must for RAC installation. The nodes need it to communicate with each other and to run commands on one another during installation, runtime, and maintenance.
We will set up user-equivalence for both the oracle and oradb users. Let us start with the grid owner, the oracle user.
Perform the following on Node 1.
Let us first create the known_hosts entries so that ssh does not prompt to add the host later. We will use all 4 combinations every time, i.e. dbhost1, dbhost1.paramlabs.com (fully qualified domain name), dbhost2, and dbhost2.paramlabs.com.
[root@dbhost1 ~]# su - oracle
[oracle@dbhost1 ~]$ ssh oracle@dbhost2
The authenticity of host 'dbhost2 (192.168.112.102)' can't be established.
RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dbhost2,192.168.112.102' (RSA) to the list of known hosts.
oracle@dbhost2's password:
[oracle@dbhost1 ~]$ ssh oracle@dbhost1
The authenticity of host 'dbhost1 (192.168.112.101)' can't be established.
RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dbhost1,192.168.112.101' (RSA) to the list of known hosts.
oracle@dbhost1's password:
[oracle@dbhost1 ~]$ ssh oracle@dbhost1.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
…
[oracle@dbhost1 ~]$ ssh oracle@dbhost2.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
…
Now let us generate the public key for dbhost1. Accept the defaults and press Enter whenever prompted.
[oracle@dbhost1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
da:dd:34:90:3a:cb:01:b8:42:4d:14:90:4e:7d:4e:f1 oracle@dbhost1.paramlabs.com
This will generate the following files in the <home directory>/.ssh directory:
[oracle@dbhost1 ~]$ ls -ltr /home/oracle/.ssh/
total 12
-rw-r--r-- 1 oracle dba 618 Feb 17 13:10 id_dsa.pub
-rw------- 1 oracle dba 668 Feb 17 13:10 id_dsa
-rw-r--r-- 1 oracle dba 1616 Feb 17 13:11 known_hosts
Now we need to copy this public key to the second host so that the oracle user can connect from Node 1 to Node 2 without a password. We will save this public key in a file named authorized_keys on Node 2.
[oracle@dbhost1 ~]$ scp /home/oracle/.ssh/id_dsa.pub oracle@dbhost2:/home/oracle/.ssh/authorized_keys
oracle@dbhost2's password:
id_dsa.pub 100% 618 0.6KB/s 00:00
We will also append dbhost1's public key to its own authorized_keys file so that it can ssh to itself without a password (covering 2 of the 4 combinations).
[oracle@dbhost1 ~]$ cat /home/oracle/.ssh/id_dsa.pub >> /home/oracle/.ssh/authorized_keys
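As a side note, on distributions that ship the ssh-copy-id utility, the scp and cat steps can be combined into one command that appends the key (rather than overwriting authorized_keys), which is safer if more nodes are added later:
[oracle@dbhost1 ~]$ ssh-copy-id -i /home/oracle/.ssh/id_dsa.pub oracle@dbhost2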
Now perform the same steps on Node 2 (dbhost2).
[oracle@dbhost2 ~]$ ssh oracle@dbhost1
The authenticity of host 'dbhost1 (192.168.112.101)' can't be established.
RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dbhost1,192.168.112.101' (RSA) to the list of known hosts.
oracle@dbhost1's password:
[oracle@dbhost2 ~]$ ssh oracle@dbhost1.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
…
[oracle@dbhost2 ~]$ ssh oracle@dbhost2
…
Are you sure you want to continue connecting (yes/no)? yes
…
[oracle@dbhost2 ~]$ ssh oracle@dbhost2.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
…
Generate the public key on dbhost2.
[oracle@dbhost2 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
04:bb:86:9d:b4:e6:33:28:62:e4:fa:ff:02:ba:8c:ea oracle@dbhost2.paramlabs.com
[oracle@dbhost2 ~]$ ls -ltr /home/oracle/.ssh/
total 16
-rw-r--r-- 1 oracle dba 1616 Feb 17 13:10 known_hosts
-rw-r--r-- 1 oracle dba 618 Feb 17 13:13 authorized_keys
-rw-r--r-- 1 oracle dba 618 Feb 17 13:13 id_dsa.pub
-rw------- 1 oracle dba 672 Feb 17 13:13 id_dsa
[oracle@dbhost2 ~]$ scp /home/oracle/.ssh/id_dsa.pub oracle@dbhost1:/home/oracle/.ssh/authorized_keys
oracle@dbhost1's password:
id_dsa.pub 100% 618 0.6KB/s 00:00
[oracle@dbhost2 ~]$ cat /home/oracle/.ssh/id_dsa.pub >> /home/oracle/.ssh/authorized_keys
Now let us test the passwordless connectivity between the 2 nodes using the oracle user.
On Node 1
[oracle@dbhost1 ~]$ ssh oracle@dbhost1
[oracle@dbhost1 ~]$ exit
As you can see, it did not prompt for a password. Check the same for all 4 combinations. Make sure to exit after each ssh test so you return to the original shell.
[oracle@dbhost1 ~]$ ssh oracle@dbhost2
Last login: Sun Feb 17 13:14:23 2013 from dbhost1.paramlabs.com
[oracle@dbhost1 ~]$ exit
[oracle@dbhost1 ~]$ ssh oracle@dbhost1.paramlabs.com
Last login: Sun Feb 17 13:15:59 2013 from dbhost1.paramlabs.com
[oracle@dbhost1 ~]$ exit
[oracle@dbhost1 ~]$ ssh oracle@dbhost2.paramlabs.com
Last login: Sun Feb 17 13:16:03 2013 from dbhost1.paramlabs.com
[oracle@dbhost1 ~]$ exit
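Instead of typing each test by hand, a small shell loop (just a convenience sketch) can exercise all 4 combinations in one go; each line should print the remote host name without any password prompt:
[oracle@dbhost1 ~]$ for h in dbhost1 dbhost1.paramlabs.com dbhost2 dbhost2.paramlabs.com; do ssh oracle@$h hostname; done
The same one-liner can be run from Node 2 as well.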
On Node 2
[oracle@dbhost2 ~]$ ssh dbhost1
Last login: Sun Feb 17 13:16:12 2013 from dbhost1.paramlabs.com
[oracle@dbhost2 ~]$ exit
[oracle@dbhost2 ~]$ ssh dbhost2
Last login: Sun Feb 17 13:16:16 2013 from dbhost1.paramlabs.com
[oracle@dbhost2 ~]$ exit
[oracle@dbhost2 ~]$ ssh dbhost1.paramlabs.com
Last login: Sun Feb 17 13:17:12 2013 from dbhost2.paramlabs.com
[oracle@dbhost2 ~]$ exit
[oracle@dbhost2 ~]$ ssh dbhost2.paramlabs.com
Last login: Sun Feb 17 13:17:16 2013 from dbhost2.paramlabs.com
[oracle@dbhost2 ~]$ exit
Now we have to repeat exactly the same steps for the database owner user, oradb. Here I am skipping the creation of known_hosts entries, since they will be created anyway on the first ssh test.
[oradb@dbhost1 ~]$ ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/oradb/.ssh/id_dsa):
Created directory '/home/oradb/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oradb/.ssh/id_dsa.
Your public key has been saved in /home/oradb/.ssh/id_dsa.pub.
The key fingerprint is:
6e:fb:1e:55:cc:41:b9:a4:d3:7c:26:cf:f9:39:8c:a7 oradb@dbhost1.paramlabs.com
[oradb@dbhost1 ~]$ cd /home/oradb/.ssh/
[oradb@dbhost1 .ssh]$ scp id_dsa.pub oradb@dbhost2:/home/oradb/.ssh/authorized_keys
The authenticity of host 'dbhost2 (192.168.112.102)' can't be established.
RSA key fingerprint is af:5f:9d:92:e3:9c:4b:f0:62:15:92:16:00:b3:a2:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'dbhost2,192.168.112.102' (RSA) to the list of known hosts.
oradb@dbhost2's password:
id_dsa.pub 100% 617 0.6KB/s 00:00
[oradb@dbhost1 .ssh]$ cat id_dsa.pub >> authorized_keys
On Node 2
[oradb@dbhost2 ~]$ ssh-keygen -t dsa
…
[oradb@dbhost2 ~]$ cd /home/oradb/.ssh/
[oradb@dbhost2 .ssh]$ scp id_dsa.pub oradb@dbhost1:/home/oradb/.ssh/authorized_keys
…
Are you sure you want to continue connecting (yes/no)? yes
…
oradb@dbhost1's password:
id_dsa.pub 100% 617 0.6KB/s 00:00
[oradb@dbhost2 .ssh]$ cat id_dsa.pub >> authorized_keys
Now let us test the passwordless ssh for the oradb user on both nodes.
On Node 1
[oradb@dbhost1 .ssh]$ ssh oradb@dbhost1
…
Are you sure you want to continue connecting (yes/no)? yes
…
[oradb@dbhost1 ~]$ exit
[oradb@dbhost1 .ssh]$ ssh oradb@dbhost2
[oradb@dbhost2 ~]$ exit
[oradb@dbhost1 .ssh]$ ssh oradb@dbhost1.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
…
[oradb@dbhost1 ~]$ exit
[oradb@dbhost1 .ssh]$ ssh oradb@dbhost2.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
…
[oradb@dbhost2 ~]$ exit
On Node 2
[oradb@dbhost2 .ssh]$ ssh oradb@dbhost1
Last login: Tue Feb 19 13:26:27 2013 from dbhost1.paramlabs.com
[oradb@dbhost1 ~]$ exit
[oradb@dbhost2 .ssh]$ ssh oradb@dbhost2
…
Are you sure you want to continue connecting (yes/no)? yes
[oradb@dbhost2 ~]$ exit
[oradb@dbhost2 .ssh]$ ssh oradb@dbhost1.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
[oradb@dbhost1 ~]$ exit
[oradb@dbhost2 .ssh]$ ssh oradb@dbhost2.paramlabs.com
…
Are you sure you want to continue connecting (yes/no)? yes
…
[oradb@dbhost2 ~]$ exit
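Once both users pass all combinations on both nodes, you can optionally let Oracle's Cluster Verification Utility validate user equivalence before launching the installer. Assuming the grid installation media has been unzipped under /stage/grid (a hypothetical path for illustration), the check would look something like this:
[oracle@dbhost1 ~]$ /stage/grid/runcluvfy.sh comp admprv -n dbhost1,dbhost2 -o user_equiv -verbose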
Next: Install Oracle Clusterware
1. Create Virtual Machine and install 64 bit Linux (generic step from previous post, not specific to this guide)
2. Add additional virtual Ethernet card and perform prerequisites in Linux
3. Copy/clone this virtual machine to create second node and modify host details
4. Setup shared file system and other pre-requisites
Please share a link to download these packages:
glibc-devel-32bit-2.9-13.2
libgcc43-4.3.3_20081022
libstdc++43-4.3.3_20081022-11.18
gcc-32bit-4.3
libaio-32bit-0.3.104
libaio-devel-32bit-0.3.104
libstdc++43-32bit-4.3.3_20081022
Hi Guys,
While mounting the drive (mount /u01), I am getting an error:
mount: nfshost:/shared_1 failed, reason given by server: permission denied
Which permission is it talking about?
Hi Tushar,
Great work. When I completed the entries in /etc/exports and /etc/fstab…
nfshost:/shared_1 /u01 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nfshost:/shared_2 /u02 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
nfshost:/shared_3 /u03 nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0
…and then mounted by issuing:
mount /u01
It reports "mount.nfs: an incorrect mount option was specified".
Kindly check and correct me.
Dear Shahzad,
If the NFS version in your OS is different, you might need to specify vers=2 or vers=4 instead of 3. Otherwise, please make sure that nfshost is reachable and the NFS service is running, and check the /etc/exports file again. If everything looks fine, try mounting with just the rw,bg,hard options and see if it mounts. Sometimes we have to check the options one by one to find the one causing the issue.
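For example, two quick checks from Node 2 using standard NFS client tools:
[root@dbhost2 ~]# ping -c 2 nfshost    (is nfshost resolvable and reachable?)
[root@dbhost2 ~]# showmount -e nfshost    (are the shared_* directories exported?)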
-Tushar
Thank you very much. I didn't have the alias correct in the /etc/hosts entry. I am glad your response led me to look at it again; the problem was solved after editing.
Regards
Carl
Thanks for doing this. You have no idea how much help you have offered me. I have followed the instructions but got stuck mounting the directories I created. The error I am getting is 'mount: can't get address for nfshost'. Please advise.
Thanks once again.
Carl.
Dear Carl,
As mentioned in the post, you must add an /etc/hosts entry for nfshost pointing to the same IP as Node 1, since we are using Node 1 as the NFS host in this example.
Regards
Tushar