2 node VCS cluster upgrade help

infinitiguy
Level 4

 

Hi guys,

I'm in a nice position to upgrade a VCS cluster before it makes its way into production. The cluster is installed on RHEL 5.3 running VCS 5.0.3 MP3, and I'd like to upgrade it to 5.1 SP1.

Glancing through the upgrade guide, it doesn't look like I'd be able to do an online upgrade with 2 nodes. Other posts (my own) advise that Veritas doesn't work when the versions differ, and as soon as one node is upgraded... my versions would differ. Is there any way to do an online upgrade? If not, that's cool; I'm just looking for confirmation :)

Second, the upgrade guide says to stop the application agents and resources (my second indication that an online upgrade can't be done).

On syntax: I've never used these commands before. For hares, is it looking for the service group name?

hares -offline Service_Group_Name -sys hostnameofmachinerunninggroup  (I'd imagine this wouldn't work on the standby host.)

For haagent, is it looking for the four agents below, and do I then repeat the same on the second host?

haagent -stop NICAgent -sys hostname1

haagent -stop StorageAgent -sys hostname1

haagent -stop IPAgent -sys hostname1

haagent -stop ApplicationAgent -sys hostname1

[root@linopstfg01 ~]# ps -ef | grep Agent
root      4013     1  0 Mar23 ?        00:00:41 /opt/VRTSobc/pal33/bin/vxpal -a StorageAgent -x
root      7367     1  0 Mar23 ?        00:00:28 /opt/VRTSvcs/bin/NIC/NICAgent -type NIC
root     15556 15312  0 17:01 pts/4    00:00:00 grep Agent
root     23268     1  0 Mar24 ?        00:00:08 /opt/VRTSvcs/bin/IP/IPAgent -type IP
root     23728     1  0 Mar24 ?        00:00:09 /opt/VRTSvcs/bin/Application/ApplicationAgent -type Application

 

I'm pulling these instructions from https://sort.symantec.com/public/documents/sf/5.1/linux/pdf/vcs_install.pdf (page 234).

 

Is the above correct?  Do I have anything wrong?  

 

 
 
To prepare to upgrade to VCS 5.1 from VCS 4.x
1 Remove deprecated resources and modify attributes. The installer program can erase obsolete types and resources, or you can remove them manually.
See "Manually removing deprecated resource types and modifying attributes" on page 365.
2 Stop the application agents that are installed on the VxVM disk (for example, the NBU agent).
Perform the following steps to stop the application agents:
■ Take the resources offline on all systems that you want to upgrade.
# hares -offline resname -sys sysname
■ Stop the application agents that are installed on the VxVM disk on all the systems.
# haagent -stop agentname -sys sysname
■ Ensure that the agent processes are not running.
# ps -ef | grep Agent
This command does not list any processes in the VxVM installation directory.

12 REPLIES

Gaurav_S
Moderator
VIP Certified

Hello,

In a nutshell, yes: the cluster will NOT form when the versions differ, so you are right that once one node is upgraded it won't join the cluster with the other (un-upgraded) node. So the upgrade is done offline, one node at a time, and this procedure will need some outage. Here are the steps in short (a rough command sketch follows the list):

Let's say your two nodes are A and B and they run multiple service groups:

1. Switch/online all your service groups on node A; make sure no service group is online on node B

2. Shut down VCS on node B

3. Upgrade VCS on node B and make sure nothing starts on node B (you can do so by renaming the cluster startup scripts)

4. Shut down all service groups on node A (here your outage starts)

5. Start all cluster services on node B, i.e. the upgraded node, and start all service groups there

6. Repeat the upgrade procedure on node A; once the upgrade is done, it should join the cluster with node B automatically.
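Something like this (group and node names here are placeholders, not your actual configuration):

hagrp -switch <group_name> -to nodeA      # step 1: repeat for each service group
hastop -sys nodeB                         # step 2: stop HAD on node B only
# step 3: run the 5.1 SP1 installer on node B
hagrp -offline <group_name> -sys nodeA    # step 4: outage starts here
hastart                                   # step 5: on node B, start the upgraded VCS
hagrp -online <group_name> -sys nodeB     #         then bring each group up there
# step 6: repeat the installer run on node A, then run hastart on node A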

 

Now regarding your other questions:

-- Agents do not need to be stopped individually; once you stop the cluster (the had process), it stops all the agents.

-- Regarding resources, once you offline a service group it offlines all the resources inside it, so there is no need to run hares for each resource individually.
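In other words, a minimal example (group and node names are placeholders):

hagrp -offline <group_name> -sys <node>   # takes every resource in the group offline
hastop -local                             # run on the node itself: stops HAD and all its agents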

 

Regarding the last part, it is applicable to you only if you are using an agent that is obsolete from 5.1 onwards... so do you use any such agent? From the ps -ef output, I see all of your agents still exist in 5.1 (ignore the StorageAgent, as it belongs to the VxVM VEA GUI, not VCS).

 

Hope it's clear.

 

Gaurav

mikebounds
Level 6
Partner Accredited

Note " /opt/VRTSobc/pal33/bin/vxpal -aStorageAgent-x" is not a VCS agent is is an agent used by Volume Manager, so you can't stop using haagent - this process is normally stopped as part of the SF upgrade.

 

You can actually upgrade VCS without an outage by running "hastop -all -force" (which leaves applications running), unloading the VCS kernel modules, upgrading VCS, and starting the new version of VCS. To upgrade SF (VxVM + VxFS), however, you do need an outage, but you can minimise it using Gaurav's procedure above.
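A sketch of that VCS-only path, assuming SF itself is not being upgraded at the same time (hastop -all -force is run once; the module stop/start and package upgrade are done on each node):

hastop -all -force        # stop HAD cluster-wide but leave the applications running
/etc/init.d/vxfen stop    # unload fencing
/etc/init.d/gab stop      # unload GAB
/etc/init.d/llt stop      # unload LLT
# upgrade the VCS packages, then bring the stack back up:
/etc/init.d/llt start
/etc/init.d/gab start
/etc/init.d/vxfen start
hastart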

Mike

infinitiguy
Level 4

 

 

Hi,

I just tried to upgrade my 2nd node and got the following errors/warnings.

 

Logs are being written to /var/tmp/installer-201103291718eyK while installer is in progress
 
    Verifying systems: 100%                                                                                                      
 
    Estimated time remaining: 0:00                                                                                        8 of 8
 
    Checking system communication ......................................................................................... Done
    Checking release compatibility ........................................................................................ Done
    Checking installed product ............................................................................................ Done
    Checking prerequisite patches and rpms ................................................................................ Done
    Checking platform version ............................................................................................. Done
    Checking file space ................................................................................................... Done
    Performing product license checks ..................................................................................... Done
    Performing product prechecks .......................................................................................... Done
 
System verification did not complete successfully
 
The following errors were discovered on the systems:
 
The following required OS rpms were not found on linopstfg02.prod.domain.com:
        libacl-2.2.39-3.el5.i386
 
The following warnings were discovered on the systems:
 
Not all systems of the cluster, app_vcsvm_prod, have been entered to be upgraded. It is recommended that all cluster systems are
upgraded together, except when you plan to perform phased upgrade on the set of systems.
 
VCS is not running before upgrade. Please make sure all the configurations are valid before upgrade.
 
installer log files, summary file, and response file are saved at:
 
        /opt/VRTS/install/logs/installer-201103291718eyK
 
Now:
I assume the warning is fine because we're purposely leaving node1 online while we run the upgrade on node2 to minimize downtime. Regarding the error, is this something that I just need to install manually, or is there some way that VCS can resolve that dependency for me?
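(For anyone hitting the same error: the missing 32-bit libacl can usually just be installed by hand before re-running the installer, e.g. from a configured yum repository or the RHEL 5 media:)

yum install libacl.i386
# or, straight from the media rpm:
rpm -ivh libacl-2.2.39-3.el5.i386.rpm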
 

 

infinitiguy
Level 4

Actually, I figured that out... I should have tried before posting.

However, I notice now that I'm doing what VCS considers a "phased" upgrade, and the upgrade guide tells me to disable fencing and do all sorts of other things. Is this necessary (it seems to assume services are still running), as opposed to what we're doing by shutting off the services on the second node? It would seem to me that changing config files when services are shut off wouldn't really do much good.

Gaurav_S
Moderator
VIP Certified

Well, since it's a phased upgrade, one of your nodes is still accessing the data, so changing the fencing configuration won't be good; even deleting keys won't be necessary. When you have upgraded node B successfully, shut down all apps on node A, shut down the cluster, then shut down fencing and GAB on node A, and start all of these components on node B. That should start GAB, fencing, and HAD with the 5.1 modules.
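A rough sketch of that cutover (group names are placeholders; the exact init-script steps on node B depend on what the installer already restarted there):

hagrp -offline <group_name> -sys nodeA    # on node A: repeat for each group (outage starts)
hastop -local                             # on node A: stop HAD
/etc/init.d/vxfen stop                    # on node A: stop fencing
/etc/init.d/gab stop                      # on node A: stop GAB

/etc/init.d/gab start                     # on node B (upgraded): start 5.1 GAB
# if GAB won't seed with only one node up, it can be seeded manually (gabconfig -x)
/etc/init.d/vxfen start                   # on node B: start 5.1 fencing
hastart                                   # on node B: start HAD
hagrp -online <group_name> -sys nodeB     # bring the service groups up on node B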

Hope it's clear.

Gaurav

infinitiguy
Level 4

Should GAB/fencing be shut down on node B before doing the upgrade? I guess what I'm unsure of is which services need to be offline on node B compared to what needs to be shut down on node A.

 

So far what I've done is:

moved everything to node 1

hastop -sys hostname

 [root@linopstfg02 ~]# lltconfig
LLT is running
[root@linopstfg02 ~]# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   dbff03 membership 01
Port b gen   dbff06 membership 01

Then I was going to start the installer and do the upgrade. Fencing is still running, as well as anything else not stopped by the hastop.

Below I've pasted a ps -ef. Forgive me if I've missed something; I'm not that familiar with which VCS services need to be shut down. I'm not sure whether there would be issues with some components still running during the upgrade, or with components being upgraded while they are being accessed by the other node. I want to make sure I do this right, and only once, so that I can replicate the exact same procedure on our current production nodes.

 

 

[root@linopstfg02 ~]# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Mar23 ?        00:00:01 init [3]                                              
root         2     1  0 Mar23 ?        00:00:19 [migration/0]
root         3     1  0 Mar23 ?        00:00:00 [ksoftirqd/0]
root         4     1  0 Mar23 ?        00:00:00 [watchdog/0]
root         5     1  0 Mar23 ?        00:00:01 [migration/1]
root         6     1  0 Mar23 ?        00:00:00 [ksoftirqd/1]
root         7     1  0 Mar23 ?        00:00:00 [watchdog/1]
root         8     1  0 Mar23 ?        00:00:01 [migration/2]
root         9     1  0 Mar23 ?        00:00:00 [ksoftirqd/2]
root        10     1  0 Mar23 ?        00:00:00 [watchdog/2]
root        11     1  0 Mar23 ?        00:00:02 [migration/3]
root        12     1  0 Mar23 ?        00:00:00 [ksoftirqd/3]
root        13     1  0 Mar23 ?        00:00:00 [watchdog/3]
root        14     1  0 Mar23 ?        00:01:13 [events/0]
root        15     1  0 Mar23 ?        00:00:16 [events/1]
root        16     1  0 Mar23 ?        00:00:00 [events/2]
root        17     1  0 Mar23 ?        00:00:00 [events/3]
root        18     1  0 Mar23 ?        00:00:00 [khelper]
root       147     1  0 Mar23 ?        00:00:00 [kthread]
root       154   147  0 Mar23 ?        00:00:00 [kblockd/0]
root       155   147  0 Mar23 ?        00:00:00 [kblockd/1]
root       156   147  0 Mar23 ?        00:00:00 [kblockd/2]
root       157   147  0 Mar23 ?        00:00:00 [kblockd/3]
root       158   147  0 Mar23 ?        00:00:00 [kacpid]
root       217   147  0 Mar23 ?        00:00:00 [cqueue/0]
root       218   147  0 Mar23 ?        00:00:00 [cqueue/1]
root       219   147  0 Mar23 ?        00:00:00 [cqueue/2]
root       220   147  0 Mar23 ?        00:00:00 [cqueue/3]
root       223   147  0 Mar23 ?        00:00:00 [khubd]
root       225   147  0 Mar23 ?        00:00:00 [kseriod]
root       314   147  0 Mar23 ?        00:00:00 [pdflush]
root       315   147  0 Mar23 ?        00:00:00 [pdflush]
root       316   147  0 Mar23 ?        00:00:00 [kswapd0]
root       317   147  0 Mar23 ?        00:00:00 [aio/0]
root       318   147  0 Mar23 ?        00:00:00 [aio/1]
root       319   147  0 Mar23 ?        00:00:00 [aio/2]
root       320   147  0 Mar23 ?        00:00:00 [aio/3]
root       461   147  0 Mar23 ?        00:00:00 [kpsmoused]
root       527   147  0 Mar23 ?        00:00:00 [mpt_poll_0]
root       528   147  0 Mar23 ?        00:00:00 [scsi_eh_0]
root       534   147  0 Mar23 ?        00:00:00 [ata/0]
root       535   147  0 Mar23 ?        00:00:00 [ata/1]
root       536   147  0 Mar23 ?        00:00:00 [ata/2]
root       537   147  0 Mar23 ?        00:00:00 [ata/3]
root       538   147  0 Mar23 ?        00:00:00 [ata_aux]
root       554   147  0 Mar23 ?        00:00:00 [kstriped]
root       575   147  0 Mar23 ?        00:00:01 [kjournald]
root       602   147  0 Mar23 ?        00:00:00 [kauditd]
root       636     1  0 Mar23 ?        00:00:00 /sbin/udevd -d
root      1829   147  0 Mar23 ?        00:00:00 [kmpathd/0]
root      1830   147  0 Mar23 ?        00:00:00 [kmpathd/1]
root      1831   147  0 Mar23 ?        00:00:00 [kmpathd/2]
root      1832   147  0 Mar23 ?        00:00:00 [kmpathd/3]
root      1833   147  0 Mar23 ?        00:00:00 [kmpath_handlerd]
root      1868   147  0 Mar23 ?        00:00:00 [kjournald]
root      1870   147  0 Mar23 ?        00:00:03 [kjournald]
root      1872   147  0 Mar23 ?        00:00:01 [kjournald]
root      1874   147  0 Mar23 ?        00:00:00 [kjournald]
root      1876   147  0 Mar23 ?        00:00:00 [kjournald]
root      1987     1  0 Mar23 ?        00:00:00 [vxfs_thread]
root      2121   147  0 Mar23 ?        00:00:00 [ib_addr]
root      2137   147  0 Mar23 ?        00:00:00 [ib_mcast]
root      2138   147  0 Mar23 ?        00:00:00 [ib_inform]
root      2139   147  0 Mar23 ?        00:00:00 [local_sa]
root      2145   147  0 Mar23 ?        00:00:00 [iw_cm_wq]
root      2152   147  0 Mar23 ?        00:00:00 [ib_cm/0]
root      2153   147  0 Mar23 ?        00:00:00 [ib_cm/1]
root      2154   147  0 Mar23 ?        00:00:00 [ib_cm/2]
root      2155   147  0 Mar23 ?        00:00:00 [ib_cm/3]
root      2161   147  0 Mar23 ?        00:00:00 [rdma_cm]
root      2172     1  0 Mar23 ?        00:00:00 iscsid
root      2173     1  0 Mar23 ?        00:00:00 iscsid
root      2491     1  0 Mar23 ?        00:00:00 syslogd -m 0
root      2494     1  0 Mar23 ?        00:00:00 klogd -x
root      2511     1  0 Mar23 ?        00:00:39 irqbalance
root      2528   147  0 Mar23 ?        00:00:00 [scsi_eh_1]
root      2529   147  0 Mar23 ?        00:00:00 [scsi_wq_1]
rpc       2579     1  0 Mar23 ?        00:00:00 portmap
root      2631     1  0 Mar23 ?        00:00:00 rpc.statd
root      2743     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2744     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2745     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2746     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2747     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2748     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2749     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2750     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2751     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2752     1  0 Mar23 ?        00:00:00 [dmp_daemon]
root      2772     1  0 Mar23 ?        00:00:00 [voltuned]
root      2773     1  0 Mar23 ?        00:00:00 [vxiod]
root      2774     1  0 Mar23 ?        00:00:00 [vxiod]
root      2775     1  0 Mar23 ?        00:00:00 [vxiod]
root      2776     1  0 Mar23 ?        00:00:00 [vxiod]
root      2777     1  0 Mar23 ?        00:00:00 [vxiod]
root      2778     1  0 Mar23 ?        00:00:00 [vxiod]
root      2779     1  0 Mar23 ?        00:00:00 [vxiod]
root      2780     1  0 Mar23 ?        00:00:00 [vxiod]
root      2781     1  0 Mar23 ?        00:00:00 [vxiod]
root      2782     1  0 Mar23 ?        00:00:00 [vxiod]
root      2783     1  0 Mar23 ?        00:00:00 [vxiod]
root      2784     1  0 Mar23 ?        00:00:00 [vxiod]
root      2785     1  0 Mar23 ?        00:00:00 [vxiod]
root      2786     1  0 Mar23 ?        00:00:00 [vxiod]
root      2787     1  0 Mar23 ?        00:00:00 [vxiod]
root      2788     1  0 Mar23 ?        00:00:00 [vxiod]
root      2851     1  0 Mar23 ?        00:00:01 vxconfigd -x syslog
root      3135     1  0 Mar23 ?        00:00:00 [vmmemctl]
root      3166     1  0 Mar23 ?        00:03:26 /usr/sbin/vmware-guestd --background /var/run/vmware-guestd.pid
root      3193     1  0 Mar23 ?        00:00:00 /opt/VRTSpbx/bin/pbx_exchange
root      3229   147  0 Mar23 ?        00:00:02 [rpciod/0]
root      3230   147  0 Mar23 ?        00:00:00 [rpciod/1]
root      3231   147  0 Mar23 ?        00:00:00 [rpciod/2]
root      3232   147  0 Mar23 ?        00:00:02 [rpciod/3]
root      3254     1  0 Mar23 ?        00:00:00 [lockd]
root      3265     1  0 Mar23 ?        00:00:00 [vxnetd]
root      3266     1  0 Mar23 ?        00:00:00 [vxnetd]
root      3315     1  0 Mar23 ?        00:00:54 automount
nscd      3340     1  0 Mar23 ?        00:02:49 /usr/sbin/nscd
root      3363     1  0 Mar23 ?        00:00:00 /usr/sbin/snmpd -Lsd -Lf /dev/null -p /var/run/snmpd.pid -a
root      3500     1  0 Mar23 ?        00:00:00 /sbin/vxesd
root      3510     1  0 Mar23 ?        00:00:00 /bin/sh - /usr/lib/vxvm/bin/vxrelocd root
root      3514     1  0 Mar23 ?        00:00:00 /bin/sh - /usr/lib/vxvm/bin/vxvvrsecdgd root
root      3515     1  0 Mar23 ?        00:00:00 /bin/sh - /usr/lib/vxvm/bin/vxconfigbackupd
root      3565  3510  0 Mar23 ?        00:00:00 vxnotify -f -w 15
root      3566  3510  0 Mar23 ?        00:00:00 /bin/sh - /usr/lib/vxvm/bin/vxrelocd root
root      3589  3515  0 Mar23 ?        00:00:00 vxnotify
root      3591  3515  0 Mar23 ?        00:00:00 /bin/sh - /usr/lib/vxvm/bin/vxconfigbackupd
root      3593  3514  0 Mar23 ?        00:00:00 /usr/sbin/vxnotify -x -w 15
root      3594  3514  0 Mar23 ?        00:00:00 /bin/sh - /usr/lib/vxvm/bin/vxvvrsecdgd root
root      3601     1  0 Mar23 ?        00:00:00 /usr/sbin/sshd
ntp       3633     1  0 Mar23 ?        00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid
root      3686     1  0 Mar23 ?        00:00:03 [lltd]
root      3687     1  0 Mar23 ?        00:00:05 [lltd]
root      3688     1  0 Mar23 ?        00:00:08 [lltdlv]
root      3720  3601  0 20:27 ?        00:00:00 sshd: dmurphy [priv]
dmurphy   3724  3720  0 20:27 ?        00:00:00 sshd: dmurphy@pts/0
dmurphy   3725  3724  0 20:27 pts/0    00:00:00 -bash
root      3743  3601  0 20:27 ?        00:00:00 sshd: dmurphy [priv]
dmurphy   3747  3743  0 20:27 ?        00:00:00 sshd: dmurphy@pts/1
dmurphy   3748  3747  0 20:27 pts/1    00:00:00 -bash
root      3766  3725  0 20:27 pts/0    00:00:00 su -
root      3767  3766  0 20:27 pts/0    00:00:00 -bash
root      3791  3767  0 20:27 pts/0    00:00:00 bash
root      3807     1  0 Mar23 ?        00:00:09 [lltdlv]
root      3922     1  0 Mar23 ?        00:00:00 [vxfen]
root      3923     1  0 Mar23 ?        00:00:00 [vxfen]
root      3924     1  0 Mar23 ?        00:00:08 [lltdlv]
root      3950     1  0 Mar23 ?        00:00:34 /opt/VRTSobc/pal33/bin/vxpal -a StorageAgent -x
root      3979  4127  0 20:30 ?        00:00:00 sleep 300
root      4004     1  0 Mar23 ?        00:00:00 /opt/VRTSdcli/xprtl/bin/xprtld /opt/VRTSdcli/xprtl/etc/xprtld.conf
root      4021     1  0 Mar23 ?        00:00:00 /usr/sbin/vxdclid
root      4045     1  0 Mar23 ?        00:00:01 crond
root      4076     1  0 Mar23 ?        00:00:27 /usr/sbin/vradmind
root      4127     1  0 Mar23 ?        00:01:00 /bin/sh - /etc/vx/vras/templates/vvr_stats
root      4143     1  0 Mar23 ?        00:00:00 /usr/sbin/in.vxrsyncd
root      4154  3791  0 20:34 pts/0    00:00:00 ps -ef
root      4161     1  0 Mar23 ?        00:00:25 cfenvd
root      4199     1  0 Mar23 ?        00:00:00 /opt/VRTSvcs/bin/CmdServer
root      4202     1  0 Mar23 tty1     00:00:00 /sbin/mingetty tty1
root      4203     1  0 Mar23 tty2     00:00:00 /sbin/mingetty tty2
root      4204     1  0 Mar23 tty3     00:00:00 /sbin/mingetty tty3
root      4205     1  0 Mar23 tty4     00:00:00 /sbin/mingetty tty4
root      4209     1  0 Mar23 tty5     00:00:00 /sbin/mingetty tty5
root      4210     1  0 Mar23 tty6     00:00:00 /sbin/mingetty tty6
root      4250     1  0 Mar23 ?        00:00:09 [lltdlv]

Gaurav_S
Moderator
VIP Certified

Hello,

GAB & fencing will not be shutdown by running hastop command. The reason I suggested you to shutdown Fencing & GAB on node B was following:

-- lets say you ran an installer on node B, (it should anyways shutdown vxfen & Gab) but still safer to do with our own hands. installer will need to upgrade these components as well, so Gab & Vxfen will be upgraded to 5.1 now once node B is ready to take up services after upgrade, it will not join even Gab membership with node A since there is a version difference now moreover you have active node A accessing disks at this point. I hope you get what I am trying to say here..

so to overcome, on node B, shutdown Fencing & Gab (/etc/init.d/vxfen stop &  /etc/init.d/gab stop). This will stop the vxfen & Gab. now you can go ahead & run the upgrade so that both these components are upgraded to 5.1.
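(A quick way to double-check node B is out of the picture before running the installer; stop fencing before GAB, since vxfen sits on top of GAB:)

/etc/init.d/vxfen stop    # on node B: stop fencing first
/etc/init.d/gab stop      # on node B: then stop GAB
gabconfig -a              # on node A: node B should have dropped out of ports a and b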

Later, on node A, you should repeat the same procedure, as you also need to upgrade both of these components there in order to join membership with node B.

Hope it's clear.

 

Gaurav 

infinitiguy
Level 4

Is there a way to override the admin account password that was originally set on the cluster? After upgrading the cluster, it doesn't look like the account works anymore.

infinitiguy
Level 4

Sorry, a little more data might be useful.

I installed the Java Cluster Manager (hagui) to try accessing the cluster. Everything seems to be online on both boxes.

 

 

 
[root@linopstfg01 collabnet]# vxfenadm -d
 
I/O Fencing Cluster Information:
================================
 
 Fencing Protocol Version: 201
 Fencing Mode: SCSI3
 Fencing SCSI3 Disk Policy: dmp
 Cluster Members:  
 
        * 0 (linopstfg01.prod.domain.com)
          1 (linopstfg02.prod.domain.com)
 
 RFSM State Information:
        node   0 in state  8 (running)
        node   1 in state  8 (running)
 
[root@linopstfg01 collabnet]# hastatus -sum
 
-- SYSTEM STATE
-- System               State                Frozen              
 
A  linopstfg01.prod.domain.com RUNNING              0                    
A  linopstfg02.prod.domain.com RUNNING              0                    
 
[root@linopstfg01 collabnet]# lltconfig
LLT is running
[root@linopstfg01 collabnet]# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   c3ba02 membership 01
Port b gen   c3ba01 membership 01
Port h gen   c3ba04 membership 01
[root@linopstfg01 collabnet]# 

infinitiguy
Level 4

I found that the upgrade seemed to move configs around; my main.cf was very small and included none of my old service groups (yay for backups).
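(In case it helps anyone else, a sketch of restoring a saved main.cf; this assumes the backup copy is valid and that HAD is stopped first so it doesn't overwrite the file again:)

hastop -all -force                       # stop HAD everywhere, leave the applications running
cp main.cf.backup /etc/VRTSvcs/conf/config/main.cf    # main.cf.backup is a placeholder name
hacf -verify /etc/VRTSvcs/conf/config    # syntax-check the restored configuration
hastart                                  # start HAD on this node first; the other node
                                         # then builds its config from it over GAB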

 

I moved a backup into place, but that didn't seem to help. I found a post mentioning:

 

Log in as root, and run the hauser command to add the admin user again:

/opt/VRTSvcs/bin/hauser -add admin -priv Administrator

 

So I might try that.
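For what it's worth, hauser changes the cluster configuration, so the configuration has to be writable first; presumably something like:

haconf -makerw                                           # open the config read-write
/opt/VRTSvcs/bin/hauser -add admin -priv Administrator   # prompts for a new password
haconf -dump -makero                                     # save and close the config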

Gaurav_S
Moderator
VIP Certified

If you still have a question left, I can move this to a new discussion, since this seems to be a different issue than the one for which this thread was started. Let me know if this works; otherwise I will move it to a different thread.

 

G

infinitiguy
Level 4

Yep, different question: it looks like my main.cf is being overwritten with a blank VCS cluster config. This can be moved to a different thread, or I can start one myself. Let me know which you'd prefer.