Veritas Storage Foundation - Volume Disabled After 'rmdisk'
Dear All, I added a LUN to a volume and then realised I had added it to the wrong one. To remove the LUN, the following command was run:

vxdg -g dg rmdisk vsp-xx-xx

I was then prompted to re-run the command with the "-k" option to remove the disk. However, after re-running it with "-k":

vxdg -g dg -k rmdisk vsp-xx-xx

... the volume went into a disabled state. Fortunately no data was lost once "vxmend" was completed on the volume. I would just like to know whether this is to be expected when running the above with the "-k" option?

Regards
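As I understand it, "-k" removes the disk while keeping its disk media record, so any subdisks still on it are detached; if the affected plex was the volume's only (or only clean) plex, the volume ends up disabled, which matches what you saw. A minimal sketch of the non-destructive path, assuming the LUN had been added as a mirror of a volume called vol01 (hypothetical name) - the idea is to free the disk of subdisks first so a plain rmdisk succeeds without "-k":

vxprint -g dg -ht | grep vsp-xx-xx                 # confirm which subdisks/plexes still sit on the disk
vxassist -g dg remove mirror vol01 '!vsp-xx-xx'    # drop the unwanted mirror from vol01, or:
vxevac -g dg vsp-xx-xx                             # evacuate any needed data to other disks in the dg
vxdg -g dg rmdisk vsp-xx-xx                        # now succeeds without -k, leaving the volume enabled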
Migrating to a new SAN

We're in the process of moving to new datacenters. All servers will be moved over, but the SAN won't. The SAN will be replicated to a new SAN in the new datacenters by our SAN admins. That means the LUNs on the new SAN will be identical to the old ones, including the LUN numbers. Example output from 'vxdisk list' on the current host shows:

c1t50050768014052E2d129s2 auto:cdsdisk somedisk01 somedg online

This disk will get the same LUN number, but the target name will probably differ as it's new hardware in the new DC. Will this work with Veritas SFHA? If not, is there any way to do this the way the datacenter project wants to? If I could choose to do this my way, I would present the LUNs on the new SAN to my servers so that I could do a normal host-based migration: add the new LUN to the disk group and mirror the data, then remove the old LUN. However, I'm told that the hosts in the current datacenter won't be able to see the new SAN.
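VxVM identifies disks by the metadata in each disk's private region rather than by the cXtXdX device name, so a disk group replicated block-for-block should be importable on the new hosts even though the controller/target names change. A hedged sketch of the checks on a new-DC host, assuming the disk group is somedg as in the example and the old hosts no longer have it imported:

vxdctl enable              # rescan devices after the replicated LUNs are presented
vxdisk -o alldgs list      # the new device names should show somedg in the GROUP column
vxdg import somedg         # or "vxdg -C import somedg" if the old host's import lock must be cleared
vxrecover -g somedg -sb    # start the volumes, resynchronising in the background where needed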
Storage Foundation (Storage Checkpoints on Oracle)

Hello Experts (if you're out there). I'm running Oracle 11gR2 on RHEL 6.2 with SFHA 6.0.x. I have configured SFDB and can take checkpoints and restore them, but this doesn't really mean my database is up and running again. Once you restore from your checkpoint, you get a nice little note saying you might want to use the control file located at /var/tmp/XXXXXXX/control01.ctl to recover your database. And this is where it all goes very pear-shaped. I don't think the documents tell us enough about what is really going on when you run the restore. Secondly, combined with a very limited understanding of Oracle recovery concepts, it's quite difficult to actually recover the database. What is happening behind the curtains? Anybody with any information, please share it.
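A hedged sketch of the Oracle side of things after the checkpoint rollback, purely to illustrate the recovery concepts the note is pointing at (mount with the suggested control file copy, apply archived redo, open with resetlogs). Whether resetlogs is actually required depends on how the checkpoint was taken, and the exact SFDB steps vary between 6.0.x releases, so treat this as an illustration rather than the documented procedure:

sqlplus / as sysdba
SQL> startup nomount
SQL> -- copy the suggested control file (e.g. /var/tmp/XXXXXXX/control01.ctl) over the paths in control_files
SQL> alter database mount;
SQL> recover database using backup controlfile until cancel;    -- apply archive logs, then type CANCEL
SQL> alter database open resetlogs;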
Required some inputs for VCS functions...

As we are in the phase of building a DR project for Enterprise Vault, we have some queries:
1) Once the data is replicated to DR, is there any way in VCS to check the integrity of the replicated data?
2) Copying vault store partitions to new storage would be a time-consuming task, since the size of the partitions and the number of files is huge. What would be the best-practice method recommended by Symantec for copying this data in order to improve efficiency and also minimise the risk of data loss?
3) Once the solution is deployed, is it possible to revert to Windows-format storage from Symantec-format storage with minimal data loss?
Regards VVV
Some VCS plexes disabled after power outage

Hi all, I'm running Veritas Storage Foundation Standard HA 5.0MP3 under SUSE Linux Enterprise Server 11 on two Oracle X6270 servers. There was a power outage, causing an immediate brutal shutdown of both servers. After power was restored, the server on which the Oracle service group was active ("db1-hasc") could not boot at all (mainboard failure). The other server ("db2-hasc") booted, but reported during boot that the cluster could not start and that manual reseeding might be needed, so I started the cluster from the working server db2-hasc using the command "gabconfig -x" (found it after some googling). In the meantime, the failed server db1-hasc was fixed and the cluster is now working (all service groups online, but on db2-hasc, the node which started successfully after the power outage). No attempt has been made yet to switch over any of the service groups (except the network service groups, which are online) to db1-hasc. However, I have noticed problems with several volumes in disk group "oracledg":

db2-hasc# vxprint -g oracledg
TY NAME          ASSOC         KSTATE    LENGTH     PLOFFS  STATE     TUTIL0  PUTIL0
dg oracledg      oracledg      -         -          -       -         -       -

dm oracled01     -             -         -          -       NODEVICE  -       -
dm oracled02     sdb           -         335462144  -       -         -       -

v  archvol       fsgen         ENABLED   62914560   -       ACTIVE    -       -
pl archvol-01    archvol       DISABLED  62914560   -       NODEVICE  -       -
sd oracled01-02  archvol-01    DISABLED  62914560   0       NODEVICE  -       -
pl archvol-02    archvol       ENABLED   62914560   -       ACTIVE    -       -
sd oracled02-02  archvol-02    ENABLED   62914560   0       -         -       -

v  backupvol     fsgen         ENABLED   167772160  -       ACTIVE    -       -
pl backupvol-01  backupvol     DISABLED  167772160  -       NODEVICE  -       -
sd oracled01-03  backupvol-01  DISABLED  167772160  0       NODEVICE  -       -
pl backupvol-02  backupvol     ENABLED   167772160  -       ACTIVE    -       -
sd oracled02-03  backupvol-02  ENABLED   167772160  0       -         -       -

v  dbovol        fsgen         ENABLED   62914560   -       ACTIVE    -       -
pl dbovol-01     dbovol        DISABLED  62914560   -       NODEVICE  -       -
sd oracled01-01  dbovol-01     DISABLED  62914560   0       NODEVICE  -       -
pl dbovol-02     dbovol        ENABLED   62914560   -       ACTIVE    -       -
sd oracled02-01  dbovol-02     ENABLED   62914560   0       -         -       -

db2-hasc# vxprint -htg oracledg
DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
ST NAME         STATE        DM_CNT   SPARE_CNT         APPVOL_CNT
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
CO NAME         CACHEVOL     KSTATE   STATE
VT NAME         RVG          KSTATE   STATE    NVOLUME
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX  UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID  MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM     MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE    MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
EX NAME         ASSOC        VC       PERMS    MODE     STATE
SR NAME         KSTATE

dg oracledg     default      default  0        1265259474.12.db1-HASc

dm oracled01    -            -        -        -          NODEVICE
dm oracled02    sdb          auto     65536    335462144  -

v  archvol      -            ENABLED  ACTIVE   62914560   SELECT    -        fsgen
pl archvol-01   archvol      DISABLED NODEVICE 62914560   CONCAT    -        WO
sd oracled01-02 archvol-01   oracled01 62914560 62914560  0         -        NDEV
pl archvol-02   archvol      ENABLED  ACTIVE   62914560   CONCAT    -        RW
sd oracled02-02 archvol-02   oracled02 62914560 62914560  0         sdb      ENA

v  backupvol    -            ENABLED  ACTIVE   167772160  SELECT    -        fsgen
pl backupvol-01 backupvol    DISABLED NODEVICE 167772160  CONCAT    -        WO
sd oracled01-03 backupvol-01 oracled01 125829120 167772160 0        -        NDEV
pl backupvol-02 backupvol    ENABLED  ACTIVE   167772160  CONCAT    -        RW
sd oracled02-03 backupvol-02 oracled02 125829120 167772160 0        sdb      ENA

v  dbovol       -            ENABLED  ACTIVE   62914560   SELECT    -        fsgen
pl dbovol-01    dbovol       DISABLED NODEVICE 62914560   CONCAT    -        WO
sd oracled01-01 dbovol-01    oracled01 0        62914560  0         -        NDEV
pl dbovol-02    dbovol       ENABLED  ACTIVE   62914560   CONCAT    -        RW
sd oracled02-01 dbovol-02    oracled02 0        62914560  0         sdb      ENA

Does anyone have ideas on how to recover the disabled plexes/subdisks? Or which other commands to run to ascertain the current state of the cluster, in order to have a clear(er) picture of what is wrong and which steps to take to remedy the problem? If so, I would appreciate any tips/suggestions you can share. The physical disks seem fine (no errors reported in ILOM diagnostics). Thanks, /Hrvoje
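The vxprint output shows disk media record oracled01 with no device behind it (NODEVICE), which is why every plex on it is DISABLED while the oracled02 plexes carried on. A hedged sketch of the usual recovery once the underlying LUN/disk is visible again; the device name is a placeholder to be confirmed from vxdisk list:

vxdisk list                              # find the device that used to back oracled01 (e.g. sda - hypothetical)
/etc/vx/bin/vxreattach -c <device>       # check whether the device can simply be reattached to its dg
/etc/vx/bin/vxreattach <device>          # reattach it to the oracled01 disk media record
# if vxreattach cannot match it:  vxdg -g oracledg -k adddisk oracled01=<device>
vxrecover -g oracledg -sb                # resynchronise the DISABLED/NODEVICE plexes in the background
vxprint -htg oracledg                    # verify the plexes return to ENABLED/ACTIVE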
urgent solution needed

I am executing the command below through a script to restore volumes:

ssh -p 22 root@172.19.18.134 dd if=/dev/rmt/0n ibs=4096b | (cd /mnt; vxrestore -c -r -b 4096 -f -)

After restoring three or four volumes, it ends with the message below:

===> Verifying ossdg/JUMP
2013-05-09 09:15:48 Positioning tape at block 3
Creating ossdg/JUMP_verify (12288 MB)
New vxfs FS on JUMP_verify
vxrestore -c JUMP_verify
Using vxrestore to receive root@172.19.18.134:/dev/rmt/0n to /mnt
UX:vxfs vxrestore: ERROR: V-3-20068: cannot open /dev/tty: No such device or address

... and the next volume restore does not start. I have tried a couple of times and it gave the same error for different volumes each time; it gets stuck at a random point. Please suggest.
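The error suggests vxrestore is trying to prompt on the controlling terminal and the script's environment does not have one, which is typical when the script is launched from cron, nohup, or a detached session. A quick hedged check to run in the same environment as the restore script (the /dev/pts/1 value is only an example):

tty               # should print a real device such as /dev/pts/1; "not a tty" means no controlling terminal
ls -l /dev/tty    # confirm the node exists and is accessible to the user running the script

Running the same pipeline once from an interactive login shell is one way to confirm whether the missing terminal is what triggers the failure.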
The vxdclid daemon core dumps on AIX 7.1 hosts with Storage Foundation 4.0 MP4

Hi, please send me solutions for the following issue: the vxdclid daemon core dumps on AIX hosts with Storage Foundation 4.0 MP4. This is the errpt output:

LABEL:           CORE_DUMP
IDENTIFIER:      A924A5FC
Date/Time:       Tue Feb 5 06:38:24 USAST 2013
Sequence Number: 14290551
Machine Id:      00C8468E4C00
Node Id:         edmerpr2
Class:           S
Type:            PERM
WPAR:            Global
Resource Name:   SYSPROC

Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED

Probable Causes
SOFTWARE PROGRAM

User Causes
USER GENERATED SIGNAL

        Recommended Actions
        CORRECT THEN RETRY

Failure Causes
SOFTWARE PROGRAM

        Recommended Actions
        RERUN THE APPLICATION PROGRAM
        IF PROBLEM PERSISTS THEN DO THE FOLLOWING
        CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SIGNAL NUMBER
11
USER'S PROCESS ID:
34080660
FILE SYSTEM SERIAL NUMBER
4
INODE NUMBER
501817
CORE FILE NAME
/var/opt/VRTSsfmh/core
PROGRAM NAME
vxdclid
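A hedged sketch of the data usually worth gathering before raising this with support: the installed VRTSsfmh (Managed Host) package level and a stack trace from the core. The vxdclid binary path below is an assumption based on where VRTSsfmh normally installs; use whatever path ps actually reports on the host:

lslpp -L | grep -i VRTSsfmh                            # installed Managed Host package level
ps -ef | grep vxdclid                                  # full path of the running daemon
dbx /opt/VRTSsfmh/bin/vxdclid /var/opt/VRTSsfmh/core   # path is an assumption; substitute the path from ps
(dbx) where                                            # stack trace of the SIGSEGV (signal 11) for support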
Veritas Volume Manager 4_DG restoration

Hi, I have a SUN E2900 node with a Sun 6130 storage array connected. Veritas Volume Manager was configured on that node: the server's internal hard disks were under VxVM, and other disk groups were created using the storage array disks. The server has now completely crashed and I need to reconfigure it from scratch. Please note that I have the explorer output of the server. My question is: can I import the disk groups from the storage? Kindly suggest. Thanks in advance. Regards...Arup
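Because the disk group configuration lives in the private region of each array disk, the data disk groups should be importable once the OS, VxVM, and array LUNs are back in place on the rebuilt node. A minimal sketch, with "appdg" standing in for whatever disk group names the explorer output shows; the encapsulated internal (root) disks are the part that cannot simply be imported and would need to be rebuilt on the new installation:

vxdisk -o alldgs list    # the array disks should list their old disk group name in parentheses
vxdg -C import appdg     # -C clears the import lock left behind by the crashed host
vxrecover -g appdg -sb   # start the volumes, resynchronising where needed
vxprint -htg appdg       # confirm the volumes are ENABLED/ACTIVE before mounting the file systems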
rootdisk is failing and rootmirror is out of sync

Hi all, I encapsulated my rootdisk and rootmirror under VxVM a long time ago. Now the root disk shows as failing when I execute "vxdisk list", and I also discovered that after my last reboot three weeks ago, rootvol-02 (the rootmirror plex) is out of sync. So my system is running on rootvol with only one plex available; the other plex is out of sync. I have already purchased a replacement for my root disk. My question is how to replace the disk, knowing that it is encapsulated under VxVM. I have the outline in my mind but I am not sure about the commands. Can someone help me please? My environment contains the following:
1- Solaris 10 + SF 5.0
2- EMC CX120 array + multipathing
3- an old UFS backup of the root file system, taken with ufsdump, is available
Many thanks.
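A heavily hedged outline of the usual Solaris sequence; the plex and disk media names are examples taken from a typical encapsulated rootdg, so verify them against vxprint on the system (and the SF 5.0 admin guide) before running anything against a boot disk:

# Step 1 (assumption: the failing disk can still be read): get the stale mirror back in sync first,
# so a good copy exists on the rootmirror disk before the failing rootdisk is touched
vxrecover -g rootdg -sb rootvol          # reattach/resync rootvol-02
vxprint -htg rootdg                      # wait until both plexes show ENABLED/ACTIVE

# Step 2: remove the failing disk from rootdg, keeping its disk media record
vxplex -g rootdg -o rm dis rootvol-01    # example plex name; dissociate the plexes living on the failing disk
vxdg -g rootdg -k rmdisk rootdisk        # example disk media name

# Step 3: physically replace the disk, label it with format, then bring it back in
vxdctl enable
vxdiskadm                                # menu option "Replace a failed or removed disk"
/etc/vx/bin/vxrootmir <new-disk-media-name>   # recreate the root volume mirrors on the replacement disk

Also make sure (before step 2) that the system can actually boot from the rootmirror disk, e.g. that the OBP boot-device aliases point at it.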
vxconfigbackup warning

Whenever I try to back up the configuration, the warning below is shown. Also, when I try to restore it, the operation fails.

/etc/vx/bin/vxconfigbackup -l /opt/configbackup
Start backing up diskgroup abcdg to /opt/configbackup/abcdg.1294657147.185.sys1_01 ...
VxVM vxconfigbackup WARNING V-5-2-3608 On disk diskgroup configuration for diskgroup abcdg is invalid, please check this dg
VxVM NOTICE V-5-2-3100 Backup complete for diskgroup: abcdg

ls /opt/configbackup:
abcdg.1294657147.185.sys1_01
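The V-5-2-3608 warning is about the on-disk configuration copies for abcdg, so a hedged sketch of checks that show whether those copies really are unhealthy (disk names are placeholders for the disks in abcdg):

vxdg list abcdg                                          # shows the config copy status lines for the dg
vxprint -htg abcdg                                       # does the in-kernel view of the dg look complete?
vxdisk list <disk-in-abcdg> | grep -i config             # per-disk state of the configuration copies
/etc/vx/bin/vxconfigbackup -l /opt/configbackup abcdg    # retry the backup once the copies report clean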