vxlicrep ERROR V-21-3-1015 Failed to prepare report for key
Dear all, we got an INFOSCALE FOUNDATION LNX 1 CORE ONPREMISE STANDARD PERPETUAL LICENSE CORPORATE. I installed the key using the vxlicinst -k <key> command, but when I check it using vxlicrep I get this error for the given key:

vxlicrep ERROR V-21-3-1015 Failed to prepare report for key = <key>

We have Veritas Volume Manager 5.1 (VRTSvxvm-5.1.100.000-SP1_RHEL5 and VRTSvlic-3.02.51.010-0) running on RHEL 5.7, 64-bit. I've read that the next step is to run vxkeyless set NONE, but I'm afraid to run that until I can see the license reported correctly by vxlicrep. What can I do to fix it? Thank you in advance. Kind regards, Laszlo
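Before switching keyless licensing off, it may help to confirm what the licensing layer actually sees. A minimal check sequence, assuming the default VRTSvlic layout (key files normally live under /etc/vx/licenses/lic):

# rpm -q VRTSvlic
# ls /etc/vx/licenses/lic
# vxlicrep
# vxkeyless display

rpm -q confirms the installed licensing package version, vxlicrep with no arguments reports every installed key, and vxkeyless display shows any keyless licenses still in effect. If vxlicrep fails only on the new key, one possibility is that the key format is newer than the 3.02-era VRTSvlic can parse, in which case updating VRTSvlic first would be the safer order.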
InfoScale 7.1 and large disks (8Tb) with FSS

Hi everyone, I had been successfully running FSS with (thin) 8Tb disk drives on SFCFSHA 6.1 and 6.2.1 (see: http://vcojot.blogspot.ca/2015/01/storage-foundation-ha-61-and-flexible.html). I am trying to reproduce the same kind of setup with InfoScale 7.1 and it seems to have issues with 8Tb drives. Here's the full setup description: 2 x RHEL 6.8 hosts with 16gb RAM, 4 LSI virtual adapters, each with 15 drives. c0* and c1* have 2Tb drives; c2* and c3* have 8Tb drives. Both 2Tb and 8Tb drives are 'exported' and the cluster is stable. Here's what I noticed: creating an FSS DG works on 2Tb drives but not on 8Tb drives (it used to on 6.1 and 6.2.1):

[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_2T_00
[root@vcs18 ~]# vxdg list FSS00dg
Group: FSS00dg
dgid: 1466522672.427.vcs18
import-id: 33792.426
flags: shared cds
version: 220
alignment: 8192 (bytes)
local-activation: shared-write
cluster-actv-modes: vcs18=sw vcs19=sw
ssb: on
autotagging: on
detach-policy: local
dg-fail-policy: obsolete
ioship: on
fss: on
storage-sources: vcs18
copies: nconfig=default nlog=default
config: seqno=0.1027 permlen=51360 free=51357 templen=2 loglen=4096
config disk ssd_2T_00 copy 1 len=51360 state=clean online
log disk ssd_2T_00 copy 1 len=4096

On the 8Tb drives, it fails with:

[root@vcs18 ~]# vxdg destroy FSS00dg
[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_8T_00
VxVM vxdg ERROR V-5-1-585 Disk group FSS00dg: cannot create: Record not in disk group

One thing I noticed is that the 8Tb drives, even though exported, do -not- show up on the remote machine:

[root@vcs18 ~]# vxdisk list | grep _00
ssd_2T_00    auto:cdsdisk  -  -  online exported
ssd_2T_00_1  auto:cdsdisk  -  -  online remote
ssd_8T_00    auto:cdsdisk  -  -  online exported

One other thing to note is that the 'connectivity' seems a bit messed up on the 8Tb drives:

[root@vcs18 ~]# vxdisk list ssd_2T_00 | grep conn
connectivity: vcs18
[root@vcs18 ~]# vxdisk list ssd_2T_00_1 | grep conn
connectivity: vcs19
[root@vcs18 ~]# vxdisk list ssd_8T_00 | grep conn
connectivity: vcs18 vcs19

That is (IMHO) an error, since those 'virtual' drives are local to each of the nodes and the SCSI buses aren't shared; vcs18 and vcs19 are two fully independent VMware machines. This looks like a bug to me, but since I no longer work for a company with a vx software support contract, I cannot report the issue. Thanks for reading, Vincent
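For anyone comparing notes, a quick way to isolate the export/remote-visibility part might be the following, run on both nodes (vxdisk export/unexport are the FSS export controls; disk names as in the listing above):

# vxdctl enable
# vxdisk scandisks
# vxdisk -o alldgs list | grep ssd_8T
# vxdisk unexport ssd_8T_00
# vxdisk export ssd_8T_00
# vxdisk list ssd_8T_00

The first two refresh VxVM's device view, the grep shows whether a remote copy of the 8Tb disk (an _1 entry) ever appears, and re-exporting a single drive while watching the other node narrows down whether the export itself or the remote discovery is failing. Comparing the full vxdisk list records across nodes, including the connectivity field, should also make the bogus shared connectivity visible.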
Having issues attempting to reduce the size of a disk group.

I am attempting to reduce the size of my disk group as an exercise and am unable to do so. I believe the issue is that my plex is in a DISABLED REMOVED state and my volume is in a DISABLED ACTIVE state. From my research I see that I should have run vxresize before doing vxdg rmdisk; now I am trying to get the volume back, and I keep seeing different suggestions but nothing specific to a DISABLED REMOVED state. Any assistance would be appreciated...

bash-3.2# vxprint -g dg_acpdev
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg_acpdev dg_acpdev - - - - - -
dm dg_acpdev_d01 emc1_3ac1 - 35281408 - - - -
dm dg_acpdev_d02 - - - - REMOVED - -
dm dg_acpdev_d03 emc1_3a7c - 70640128 - - - -
dm dg_acpdev_d04 - - - - REMOVED - -
dm dg_acpdev_d05 emc1_3860 - 35281408 - - - -
pl dgacpdev-01 - DISABLED 141201408 - REMOVED - -
sd dg_acpdev_d01-01 dgacpdev-01 ENABLED 35281408 0 - - -
sd dg_acpdev_d03-01 dgacpdev-01 ENABLED 70640128 35281408 - - -
sd dg_acpdev_d04-01 dgacpdev-01 DISABLED 35279872 105921536 REMOVED - -
v dgacpdev fsgen DISABLED 141201408 - ACTIVE - -
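For what it's worth, the usual direction when a disk media record shows REMOVED but the underlying LUN is still healthy is to reattach the device to its old record and recover. A rough sketch, assuming the original LUNs still exist with their data intact (<device> is a placeholder for whatever vxdisk shows as the free device):

# vxdisk -o alldgs list
# vxdg -g dg_acpdev -k adddisk dg_acpdev_d04=<device>
# vxrecover -g dg_acpdev -s dgacpdev

The -k flag reattaches the device to the existing (removed) media record instead of creating a new one, and vxrecover then resynchronizes and starts the volume. If the plex still will not enable, vxmend -g dg_acpdev fix clean dgacpdev-01 is sometimes suggested, but only when you are certain the plex contents are good.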
Testing the process to expand a disk group, need assistance

Hello, I am fairly new to this Veritas storage world, so please be patient with me... I was tasked with expanding VxVM disk groups from 1 TB to 2 TB on a Prod, Test, and Dev server before a new client comes on board. I created a new disk group to test the process, in order to get it clean before I attack the real deal. The new disk group is dg_acpdev, which has 2 disks in it, but I can't get the maxsize, nor can I get these two to look like one disk... I have searched online and have seen several resize commands, but none of them work.

bash-3.2# vxprint -g dg_acpdev
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg dg_acpdev dg_acpdev - - - - - -
dm dg_acpdev_d01 emc1_3ac1 - 35281408 - - - -
dm dg_acpdev_d02 emc1_3a7c - 70640128 - - - -
v dgacpdev fsgen ENABLED 35280896 - ACTIVE - -
pl dgacpdev-01 dgacpdev ENABLED 35280896 - ACTIVE - -
sd dg_acpdev_d01-01 dgacpdev-01 ENABLED 35280896 0 - - -
v volspec fsgen ENABLED 62914560 - ACTIVE - -
pl volspec-01 volspec ENABLED 62914560 - ACTIVE - -
sd dg_acpdev_d02-01 volspec-01 ENABLED 62914560 0 - - -
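One point that may save some searching: disks in a VxVM disk group never merge into a single larger disk; a volume simply draws space from whichever disks the group contains. A hedged sketch using the names from the output above:

# vxassist -g dg_acpdev maxsize
# vxassist -g dg_acpdev maxgrow dgacpdev
# vxresize -g dg_acpdev dgacpdev +10g

maxsize reports the largest new volume the group's free space could hold, maxgrow reports how far the existing volume can grow, and vxresize grows the volume and its file system together (growing with vxassist growby alone would leave the file system at its old size).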
Need to download Veritas Storage foundation suite for linux

Hello, I have been trying to install the Veritas Storage Foundation HA suite on my Linux VMs. The tarball which I downloaded from https://sort.veritas.com/agents has the RPMs, but it does not give a menu-based installation wizard that resolves the dependencies during the installation process. Installing the RPMs individually is a back-breaking task! Would somebody please provide me the right URL and the name of the package that I should be downloading for RHEL6 x86-64? Any help would be much appreciated. Thank you. Regards, Sean
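For reference, the agents page only carries add-on agents; the menu-driven wizard ships on the full product media, which unpacks with a common installer script at the top level. A sketch with a hypothetical archive name, since file and directory names vary by release:

# tar xzf VRTS_SF_HA_6.x_RHEL.tar.gz
# cd dvd1-redhatlinux/rhel6_x86_64
# ./installer

./installer is the menu-based front end that selects the product, resolves the RPM dependencies, and offers configuration at the end.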
SFCFSHA 6.2 : DG in unknown status

Hi all, on a 2-node system I did a fresh installation of SFCFSHA 6.2. There's an IBM array with shared LUNs, and only one path. The Linux hosts are RHEL 6.7, 64-bit, built identically. After installation, LLT failed to start, so I also installed the MR and patch:

sfha-rhel6_x86_64-MR-6.2.1
sfha-rhel6.7_x86_64-Patch-6.2.1.100

Then LLT starts, the cluster is up, and I finished configuring it with disk fencing. Good. But as soon as I add a DG and the FS (only one on this DG) resources in VCS, the DG appears with a question mark and the "unknown status", and I see this error message coming repeatedly:

2016/02/18 15:29:22 VCS ERROR V-16-2-13040 (mynode1) Resource(cvmvoldg1): Program(/opt/VRTSvcs/bin/CVMVolDg/monitor) was abnormally terminated with the exit code(0x8b).

Following a post about this error, I verified the files /etc/VRTSvcs/conf/types.cf and /etc/VRTSvcs/conf/config/types.cf and copied the most recent one over the other (stopping the cluster, copying the file, starting the cluster). No positive results. Trying to start the resource group leads to a message saying the group cannot start since it has probes pending on both nodes. I also tried with one node, with no more success. Any advice? Here's some information:

# vxdg list
NAME STATE ID
dg_data1 enabled,shared,cds 1455640017.22.mynode1

# cfsdgadm display
Node Name : mynode2
DISK GROUP ACTIVATION MODE
dg_data1 sw
Node Name : mynode1
DISK GROUP ACTIVATION MODE
dg_data1 sw

# cfsmntadm display
Cluster Configuration for Node: mynode2
MOUNT POINT TYPE SHARED VOLUME DISK GROUP STATUS
/database/dg_data1 Regular v_dg_data1 dg_data1 NOT MOUNTED
Cluster Configuration for Node: mynode1
MOUNT POINT TYPE SHARED VOLUME DISK GROUP STATUS
/database/dg_data1 Regular v_dg_data1 dg_data1 NOT MOUNTED

Thanks
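One detail worth extracting from that message: exit code 0x8b is decimal 139, which by the usual shell convention (128 + signal number) means the monitor process was killed by signal 11, SIGSEGV, rather than returning a status. Some low-risk checks, using the resource and node names from the log:

# hastatus -sum
# hares -probe cvmvoldg1 -sys mynode1
# tail -50 /var/VRTSvcs/log/engine_A.log
# tail -50 /var/VRTSvcs/log/CVMVolDg_A.log

hares -probe forces a fresh probe, which should either clear the probes-pending complaint or reproduce the crash on demand, and the engine and agent logs usually carry more context than the summary error.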
Veritas Volume migration onto new Disk

Hello all, can anyone please suggest how to migrate these volumes to the new disks? The new DM disks are emc1_1660 & emc2_1661. I am looking for options to create a mirror first with the new disks & then disassociate/remove the old plexes.

dg dg_pqata01m_mqm default default 25000
dm emc1_1678 emc1_1678 auto 65536 844599776 -
dm emc2_1d9a emc2_1d9a auto 65536 844599776 -
dm emc1_1660 emc1_1660 auto 65536 844599776 -
dm emc2_1661 emc2_1661 auto 65536 844599776 -
v vol_log_pqata01m - ENABLED ACTIVE 41943040 SELECT - fsgen
pl vol_log_pqata01m-01 vol_log_pqata01m ENABLED ACTIVE 41943040 CONCAT - RW
sd emc1_1678-02 vol_log_pqata01m-01 emc1_1678 796917760 41943040 0 emc1_1678 ENA
pl vol_log_pqata01m-02 vol_log_pqata01m ENABLED ACTIVE 41943040 CONCAT - RW
sd emc2_1d9a-01 vol_log_pqata01m-02 emc2_1d9a 0 41943040 0 emc2_1d9a ENA
dc vol_log_pqata01m_dco vol_log_pqata01m vol_log_pqata01m_dcl
v vol_log_pqata01m_dcl - ENABLED ACTIVE 67968 SELECT - gen
pl vol_log_pqata01m_dcl-01 vol_log_pqata01m_dcl ENABLED ACTIVE 67968 CONCAT - RW
sd emc2_1d9a-02 vol_log_pqata01m_dcl-01 emc2_1d9a 41943040 67968 0 emc2_1d9a ENA
pl vol_log_pqata01m_dcl-02 vol_log_pqata01m_dcl ENABLED ACTIVE 67968 CONCAT - RW
sd emc1_1678-04 vol_log_pqata01m_dcl-02 emc1_1678 838955520 67968 0 emc1_1678 ENA
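Since the stated goal is mirror-then-remove, a hedged sketch for one plex (note the DCO log volume vol_log_pqata01m_dcl also has its plexes on the old disks and needs the same treatment):

# vxassist -g dg_pqata01m_mqm mirror vol_log_pqata01m emc1_1660
# vxtask list
# vxplex -g dg_pqata01m_mqm -o rm dis vol_log_pqata01m-01

vxtask list shows the resync progress; once it completes, vxplex -o rm dis detaches and removes the old plex. An alternative that sidesteps per-plex work is disk evacuation, which moves every subdisk (data and DCO alike) off a disk in one step: vxevac -g dg_pqata01m_mqm emc1_1678 emc1_1660, and likewise emc2_1d9a to emc2_1661.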
Calculate vxfs millisecond delay

We have a NetBackup master server cluster; the catalog/DB is sitting on a VxFS volume. There is currently a case open with Veritas: they have found massive delays (40 seconds!) at backup peak time on the Sybase database and have recommended we do an "nbdb_unload -rebuild" and "nbdb_admin -reorganize". Meanwhile, we are looking at the I/O on the catalog/DB volume. The technote below says the response shouldn't be more than 20ms. How do I determine the current response time? Is there a Storage Foundation tool that can provide this? Our host is running RHEL 6.6.
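Storage Foundation's own counter for this is vxstat, whose per-volume output includes AVG TIME(ms) columns for reads and writes. A sketch with placeholder disk group and volume names:

# vxstat -g <catalog_dg> -r
# vxstat -g <catalog_dg> -i 5 -c 12 <catalog_vol>

The first command zeroes the counters; the second prints twelve 5-second samples, and the AVG TIME(ms) columns are the per-operation response times to compare against the 20ms guideline. If multipathing could be a factor, vxdmpadm iostat start followed by vxdmpadm iostat show all gives similar numbers per path.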
Import disk group failure

Hello everyone! When I finished the disk group configuration, I could not see the disk group until I imported it manually, and after rebooting the server I could not find the disk group again unless I re-imported it. My operations are below. Also, the RVG was DISABLED until I started it, and it is DISABLED again after rebooting. Any help and suggestions would be appreciated.

[root@u31_host Desktop]# vxdisk list
DEVICE TYPE DISK GROUP STATUS
sda auto:none - - online invalid
sdb auto:cdsdisk - - online
sdc auto:cdsdisk - - online
sdd auto:cdsdisk - - online
[root@u31_host Desktop]# vxdg list
NAME STATE ID
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
sda auto:none - - online invalid
sdb auto:cdsdisk - (netnumendg) online
sdc auto:cdsdisk - (netnumendg) online
sdd auto:cdsdisk - (netnumendg) online
[root@u31_host dsk]# vxdg import netnumendg
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
sda auto:none - - online invalid
sdb auto:cdsdisk netnumendg01 netnumendg online
sdc auto:cdsdisk netnumendg02 netnumendg online
sdd auto:cdsdisk netnumendg03 netnumendg online
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg 1 DISABLED CLEAN primary 2 srl_vol
[root@u31_host dsk]# vxrvg -g netnumendg start netnumenrvg
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg 1 ENABLED ACTIVE primary 2 srl_vol

After rebooting the server:

[root@u31_host Desktop]# vxprint
[root@u31_host Desktop]# vxdg list
NAME STATE ID
[root@u31_host Desktop]# vxdisk list
DEVICE TYPE DISK GROUP STATUS
sda auto:none - - online invalid
sdb auto:cdsdisk - - online
sdc auto:cdsdisk - - online
sdd auto:cdsdisk - - online
[root@u31_host Desktop]# vxprint -rt | grep ^rv
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
sda auto:none - - online invalid
sdb auto:cdsdisk - (netnumendg) online
sdc auto:cdsdisk - (netnumendg) online
sdd auto:cdsdisk - (netnumendg) online
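For context, VxVM only auto-imports a disk group at boot when the hostid stamped in the group's configuration matches the hostid recorded in /etc/vx/volboot, and not when the group was imported temporarily (vxdg -t import). A couple of hedged checks:

# vxdctl list
# vxdg list netnumendg

vxdctl list shows the hostid in /etc/vx/volboot; vxdg list, run after a manual import, shows the group's flags and ownership for comparison. The RVG coming up DISABLED after boot is a separate matter: vxrvg start is normally issued by whatever manages replication at startup (VCS or an init script), so it has to be automated if nothing currently does it.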
After vxvmconvert

Before running vxvmconvert, pvscan reports this:

PV /dev/sdb1 VG VG_DATA lvm2 [150.00 GiB / 50.00 GiB free]

After running vxvmconvert, the above is NOT reported by pvscan, as expected. This is reported:

vxdisk list
DEVICE TYPE DISK GROUP STATUS
sda auto:LVM - - online invalid
sdb auto:none - - online invalid
sdb1 simple VG_DATA01 VG_DATA online

My question is: are sdb and sdb1 now different disks, so that sdb can be initialized as a VxVM disk?
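On the question itself: sdb1 is a partition of sdb, so they are the same physical disk; vxdisk is listing both the whole device (unlabeled, hence online invalid) and the converted partition. A quick way to see this, plus the converted record:

# grep sdb /proc/partitions
# vxdisk list sdb1

Because sdb1 is carved out of sdb, initializing sdb as a whole-disk VxVM device would rewrite its label and partition layout and destroy the converted VG_DATA data on sdb1.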