Add an additional cluster node to Veritas SFHA under Linux

I don’t know if this is the right way to do it. All I can say is that I did it that way the other day and it worked like a charm without interrupting anything on our LIVE cluster.
We migrated a while ago to a newer version of our application. The new version was not able to run under RHEL5. The problem was that we had to keep the old version up and running while we built a new cluster under RHEL6 and VCS 6. The plan was to build an independent cluster and then add the old nodes to it one by one, as soon as each of them was no longer needed in the old cluster. Building the new cluster using the installer was really easy. Adding the first ‘old’ cluster node was easy too, because the system wasn’t live at that time, so resource groups unexpectedly taking themselves offline weren’t a problem.

This three-node cluster ran without problems for weeks, and there was still that second old node that needed to be attached to the new cluster. To prepare it I installed the OS and configured it the same way all the other cluster nodes were configured, and I made it SSH accessible from the other cluster nodes. I googled for a good way to add it to the cluster without impact and found a web page telling me that the ‘installvcs -addnode’ command should be able to do it. In my case I had to use the ‘./installsfha602 -addnode’ command, which you can find in ‘/opt/VRTS/install/’. The first run showed that I had to install the rpms first. Nobody told me that on the web page 🙁
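For reference, the invocation itself is nothing special; run as root from the directory mentioned above it is simply:

#cd /opt/VRTS/install/
#./installsfha602 -addnode

This is what the first run came back with: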

    Checking installed product on node1 ................................................................................................. Failed
CPI ERROR V-9-40-3319 The following errors were discovered on the systems:

SFHA is not installed completely on node1. At least the minimal package set for SFHA should be installed prior to adding this node to a running cluster

I cd’ed to my cluster software installation folder ‘/install/rhel6_x86_64/rpms’ and executed
#yum localinstall -y *
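All of the SFHA components install as rpms whose names start with VRTS, so a quick rpm query makes an easy sanity check of what is (or is not) on the box at any point. That is just my own habit, not something the installer asks for:

#rpm -qa | grep ^VRTS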
The yum run took a while until all packages were installed. Afterwards, another run of ‘./installsfha602 -addnode’ looked more promising:

                                            Veritas Storage Foundation and High Availability 6.0.2 Add Node Program

Enter one node of the SFHA cluster to which you would like to add one or more new nodes: node2
    Checking communication on node2 ................................................................................................... Done
    Checking release compatibility on node2 ........................................................................................... Done

                                            Veritas Storage Foundation and High Availability 6.0.2 Add Node Program

Following cluster information detected:

        Cluster Name: livecl
        Cluster ID: 44177
        Systems: node3 node4 node2

Is this information correct? [y,n,q] (y)
    Checking communication on node3 ................................................................................................... Done
    Checking communication on node4 ................................................................................................... Done
    Checking VCS running state on node3 ............................................................................................... Done
    Checking VCS running state on node4 ............................................................................................... Done
    Checking VCS running state on node2 ............................................................................................... Done

                                            Veritas Storage Foundation and High Availability 6.0.2 Add Node Program

Enter the system names separated by spaces to add to the cluster: node1
    Checking communication on node1 ................................................................................................... Done
    Checking release compatibility on node1 ........................................................................................... Done
    Checking swap space on node1 ...................................................................................................... Done
Do you want to add the system(s) node1 to the cluster livecl? [y,n,q] (y)
    Checking installed product on cluster livecl .......................................................................... SFHA 6.0.200.000
    Checking installed product on node1 ............................................................................................... Failed
CPI ERROR V-9-40-3319 The following errors were discovered on the systems:

VCS is installed on node1, but does not have valid license

I used the installer ‘/install/rhel6_x86_64/installer’ to license the product and ran ‘./installsfha602 -addnode’ again. This time the installation routine started without failures and I accepted the default values for the interfaces. The installer then tried to start all the services, but failed to start the vxfen and had daemons. Damn, I had forgotten to present the box to the SAN, so the fencing disks were not visible to the new cluster node. After presenting the node to the SAN and doing a SCSI rescan, my disks were there and I was able to start vxfen and had (the sequence is sketched a bit further down).

Anyway, to make sure that everything was working, I decided to reboot the new cluster node. After the reboot the node came up and was visible in the Cluster Manager Java Console. The only remaining problem was that my service group wasn’t able to run on that node, because the new node wasn’t in the SystemList of that particular group.
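Here is that rescan and restart sequence, roughly. The exact rescan helper depends on what is installed on the box (rescan-scsi-bus.sh ships with sg3_utils), so take this as a sketch of the steps rather than a copy of my shell history; vxfenadm -d just confirms that fencing really came up before had is started:

[root@node1 ~]# rescan-scsi-bus.sh
[root@node1 ~]# /etc/init.d/vxfen start
[root@node1 ~]# vxfenadm -d
[root@node1 ~]# hastart

And this is what hastatus showed after the reboot: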
[root@node2 ~]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  node1              RUNNING              0
A  node2              RUNNING              0
A  node3              RUNNING              0
A  node4              RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  ClusterService  node1              Y          N               OFFLINE
B  ClusterService  node2              Y          N               OFFLINE
B  ClusterService  node3              Y          N               ONLINE
B  ClusterService  node4              Y          N               OFFLINE
B  LIVEApp         node2              Y          N               ONLINE
B  LIVEApp         node3              Y          N               OFFLINE
B  LIVEApp         node4              Y          N               OFFLINE
B  vxfen           node1              Y          N               ONLINE
B  vxfen           node2              Y          N               ONLINE
B  vxfen           node3              Y          N               ONLINE
B  vxfen           node4              Y          N               ONLINE

So I added the new node to the group’s SystemList:
[root@node2 ~]# hagrp -modify LIVEApp SystemList -add node1 4
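One thing to keep in mind: hagrp -modify only succeeds while the cluster configuration is writable, so unless yours already is, the full sequence including saving the change looks roughly like this (the 4 is just a priority value that puts the new node last in the failover order of that group):

[root@node2 ~]# haconf -makerw
[root@node2 ~]# hagrp -modify LIVEApp SystemList -add node1 4
[root@node2 ~]# haconf -dump -makero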


I hope that this will help you to add a new node to an existing cluster without the problems I had. As mentioned before, this worked for me, and I am fairly sure it will work for other clusters too. Anyway, if anything goes wrong on your system, don’t blame me.

About Juergen Caris

I am 54 years old, hold an MSc (Dist) and a BSc in Computer Science, am German, and work as a Senior Server Engineer for NHS Lothian. I am responsible for the patient management system, called TrakCare. I am a UNIX/Linux guy and have been working in this sector for more than 20 years. I am also interested in robotics, microprocessors, system monitoring, home automation and programming.