VxVM 4.1 Storage Migration - from 38 GB disks to 33 GB disks, how to do it?
Can someone point me in the right direction? Additional info: the volumes are striped and are used both for filesystems and raw devices (Oracle database). I can also reconfigure the destination disks to be 66 GB, presenting approximately half the number of disks. My OS admin says I need to provide the exact number of drives, which need to be the same size or larger. I disagree, and I need to provide the process to them. I don't want to overprovision if I use larger disk sizes. Thanks! Bruce.
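A minimal sketch of the mirror-and-remove approach, assuming a disk group called datadg, a striped volume called datavol, and the new LUNs added under the hypothetical disk media names newdisk01 and newdisk02 (all names are examples, not from the original post). VxVM mirrors volumes, not physical disks, so the new LUNs only need enough aggregate free space in the disk group; they do not have to match the old disks in count or size:

    # Initialise a new LUN and add it to the existing disk group (device name is an example)
    vxdisksetup -i c2t0d5
    vxdg -g datadg adddisk newdisk01=c2t0d5

    # Mirror the striped volume onto the new disks only; specify the layout
    # explicitly if the new plex should also be striped (ncol=2 is just an example)
    vxassist -g datadg mirror datavol layout=stripe ncol=2 newdisk01 newdisk02

    # Watch the resync, then remove the plex that lives on the old 38 GB disks
    vxtask list
    vxplex -g datadg -o rm dis datavol-01    # datavol-01 assumed to be the plex on the old disks

    # Finally remove the old disks from the disk group
    vxdg -g datadg rmdisk olddisk01

Raw Oracle volumes can be mirrored the same way; the volume's device path under /dev/vx/rdsk does not change, so the database does not need to be reconfigured.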

rhel 5.6 and vrts 5.1
Hi, last weekend I ran yum updates on 7 RHEL (Enterprise Server) machines, taking the kernel from 2.6.18-238.12.1.el5 to 2.6.18-308.8.2.el5. All of these servers have Veritas Storage Foundation 5.1 and Veritas Cluster Server 5.1 on top. The problem we ran into was at the VRTS level. For example, a simple vxdisk list command takes 5 to 10 minutes to produce its output. From the Symantec SORT site we found the patches for this release and installed them: VRTSvxvm-5.1.132.300-SP1RP2P3_RHEL5, VRTSvxfen-5.1.132.200-SP1RP2P2_RHEL5 and VRTSvxfs-5.1.101.000-RP1_RHEL5. But with no result. At the end of the night we booted the RHEL servers with the previous kernel, 2.6.18-238.12.1.el5, and the Veritas disk commands ran as before. Is the new kernel in the compatibility matrix, or does someone have a brilliant idea? (I need a solution, more than I would just like to have one (°_°)) Regards, Ivo
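As a first sanity check (not a fix), it may help to confirm which kernel the installed VxVM modules were built for versus what is now running. These are standard RHEL and VxVM commands, with no site-specific names assumed:

    uname -r                          # kernel currently running
    cat /etc/redhat-release           # RHEL minor release
    rpm -qa | grep -i VRTS            # installed Veritas packages and patch levels
    modinfo vxio | grep vermagic      # kernel the vxio module was built against
    vxdctl mode                       # confirm vxconfigd is up and enabled
    vxdmpadm listctlr all             # controller/path states that can slow device scans

Comparing that against the SORT compatibility list for 2.6.18-308.8.2.el5 should show whether the new kernel is actually supported by the installed patch level.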

Array Migration using VxVM on Solaris VCS cluster
Hi, we lost our Unix admin a few months ago who usually administered VxVM for us, and now I'm in a position where I need to migrate volumes between arrays. Unfortunately no documentation of how this was successfully achieved in the past was kept, so I'm looking for some help. I've seen a number of posts that relate to this, but am posting a series of questions again as I'm new to Veritas. The cluster is:
- Solaris 9
- VCS 5.0 and VxVM 5.0 MP1, two-node stretched cluster
- Each node has its own storage array and zoning to the EVA and DMX in each data centre
- QLogic HBAs and native Sun driver
- Current array: HP EVA
- Target array: EMC DMX
- Current SAN: Brocade (HP-badged)
- Target SAN: Brocade
Migration plan (with loads of questions) is:
- EMC PowerPath was installed for multipathing on the DMX a few weeks back
- Freeze the cluster in the main data centre - this node to be used for the migration
- Take the first channel out of the current SAN fabric 1 and plug it into the new SAN fabric 1 in the main data centre on the active, frozen node
- Leave both channels from the standby node in the 2nd data centre in the EVA fabrics for now
- Zone and mask target LUNs from data centre 1 and 2 on the single HBA in SAN fabric 1
- Discover LUNs (cfgadm)
- DMX storage is managed by PowerPath, so list devices using powermt display dev=all to map devices to the actual array/LUN
- Initialise each disk in VxVM (vxdisksetup -i emcpower56) - repeat for all new LUNs
- Add the DMX LUNs to the disk groups (vxdg -g testdg adddisk testdgdmx=emcpower56) - repeat for all new LUNs
- Add plexes and mirror (vxassist -g testdg mirror testvol emcpower56)
The existing volumes have two plexes, one from each data centre, each with one subdisk. Will vxassist automatically create the plex, attach it to the volume and start mirroring? Am I OK to repeat this command twice with different objects to get both new mirrors syncing at the same time? (See the sketch after this post.)
- Check that the two new plexes are attached to testvol (vxprint -qthg testdg testvol)
- Check that the sync state is completed (vxtask list)
- Disassociate the EVA plex when the sync is completed (vxmend -g testdg off testvol-01; vxplex -g testdg dis testvol-01)
- Delete the EVA plex (vxedit -g testdg -rf rm testvol-01)
- Unmask the EVA storage and clean up using cfgadm on both nodes
- Take the second channel from the active node and plug it into SAN fabric 2
- Rescan using the QLogic driver to pick up the second leg to each LUN
- Verify with powermt display dev=all
- Cable the 2nd node in the second data centre to both new fabrics and scan using the QLogic driver
- Check the 2nd node using powermt display dev=all
Can the VEA GUI be used to carry out the same as the above commands that I've researched? Thanks in advance, Sarah
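On the vxassist question: a single vxassist mirror run creates the new plex, attaches it to the volume and starts the resync in one step, and separate volumes can be mirrored in parallel. A sketch using the names from the plan (testvol2 and emcpower57 are hypothetical second objects added for illustration):

    # Create, attach and start syncing the new DMX plex in one command
    vxassist -g testdg mirror testvol emcpower56

    # A second volume can be mirrored at the same time
    vxassist -g testdg mirror testvol2 emcpower57

    # Monitor both resyncs and confirm the new plexes are attached
    vxtask list
    vxprint -qthg testdg testvol testvol2

    # Once the sync is complete, detach and delete the EVA plex as planned
    vxplex -g testdg dis testvol-01
    vxedit -g testdg -rf rm testvol-01

The VEA GUI should be able to drive the same mirror and remove-plex operations, but the command line is easier to script and to verify afterwards with vxprint.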

CFSumount hangs on RHEL6.1 with veritas 5.1SP1PR2
Hello, I'm using two RHEL 6.1 servers with the Veritas 5.1SP1PR2 cluster file system. The cluster starts OK and I can switch services to the other node, etc., but when I execute hastop -local or hastop -all the cluster hangs. It even makes one of the nodes hang completely, so that no commands can be executed as root. What I can see is that it hangs when it does the cfsumount for the shared mount points. Some of them (it changes from time to time, so not the same mount points each time) hang and appear again in the mount list, but not as real mounts, rather as copies of root /. Example below:

-bash-4.1# df -k
Filesystem                       1K-blocks     Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00    8256952  5710140    2127384  73% /
tmpfs                             18503412      424   18502988   1% /dev/shm
/dev/sda1                           253871    37102     203662  16% /boot
/dev/mapper/VolGroup00-LogVol04   12385456   863508   10892804   8% /opt
/dev/mapper/VolGroup00-LogVol02    8256952   428260    7409264   6% /tmp
/dev/mapper/VolGroup00-LogVol03   12385456  5429820    6326492  47% /var
tmpfs                                    4        0          4   0% /dev/vx
/dev/vx/dsk/mountdg/mymount        8256952  5710140    2127384  73% /mymount

Here you can see that the data values are the same for my CFS mount (/mymount) and root /. I can survive this by running umount /mymount. It throws an error, umount: /mymount: not mounted, but after this the cluster continues to go down. This is just a workaround and I do not want to leave it like this. Any ideas how to fix this? Is there a patch for this, or should I change something on RHEL or in Veritas? br, JP
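A hedged sketch of how the shutdown can be broken into smaller, observable steps so the hang can be narrowed down; the service group name cfsmount1 and the node name are placeholders for whatever your CFSMount groups and hosts are called:

    # Overall cluster and resource state before stopping anything
    hastatus -sum

    # Offline the CFS mount groups one at a time instead of a blanket hastop
    hagrp -offline cfsmount1 -sys <node>
    hagrp -state cfsmount1

    # Or unmount a single shared file system through the CFS wrapper
    /opt/VRTS/bin/cfsumount /mymount

    # Check which node is the CFS primary for a mount that misbehaves
    /opt/VRTS/bin/fsclustadm -v showprimary /mymount

If one particular group or mount reproducibly hangs, that plus the matching messages in /var/VRTSvcs/log/engine_A.log is what support will want to see; checking SORT for the latest 5.1SP1 rolling patch for RHEL 6 is also worth doing, in case this is an already-fixed issue.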

RHEL vxdisk list <devname> output and partition alignment
I've got two questions, which I'm hoping I can get some assistance with. Environment: RHEL Linux 5.7 server with connectivity to an EMC VNX storage array.
1) I wanted to get clarification on my understanding of a particular section (the public: and private: lines) of the vxdisk list <devname> output:

Device:    emc_clariion0_10
devicetag: emc_clariion0_10
type:      auto
hostid:    xxxx
disk:      name=emc_clariion0_10 id=1317280593.32.xxxx
group:     name=xxxx id=1317280602.40.xxxx
info:      format=cdsdisk,privoffset=208,pubslice=3,privslice=3
flags:     online ready private autoconfig autoimport imported
pubpaths:  block=/dev/vx/dmp/emc_clariion0_10s3 char=/dev/vx/rdmp/emc_clariion0_10s3
guid:      -
udid:      DGC%5FRAID%2010%5FAPM00113001727%5F600601606AA02D00BEF97BDA03E8E011
site:      -
version:   4.1
iosize:    min=512 (bytes) max=1024 (blocks)
public:    slice=3 offset=65744 len=943652560 disk_offset=48
private:   slice=3 offset=208 len=65536 disk_offset=48
update:    time=1320430039 seqno=0.27
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   8
sdr  state=enabled type=primary
sdv  state=enabled type=secondary
sdb  state=enabled type=primary
sdf  state=enabled type=secondary
sdz  state=enabled type=primary
sdad state=enabled type=secondary
sdj  state=enabled type=primary
sdn  state=enabled type=secondary

What do the disk_offset and offset values define in this output, and what units are they in? My assumption is that disk_offset says data gets written starting at block 48 on the disk device. For the offset value, does that define the block number the region begins at too, e.g. the private region begins at block 208? The only other possibility I can think of is that the offset values define how many blocks beyond the disk_offset each region begins at, e.g. the private region starts at 48 + 208 = block 256 and the public region starts at 65744 + 48 = block 65792. Unfortunately, it's not really obvious to me :(
2) If I wanted to align the VxVM volumes, e.g. on the 1 MiB boundary EMC recommends for CLARiiON/VNX LUNs, would I just use the puboffset switch of the vxdisksetup command and specify the block count (2048)? Thanks, Jeff
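On question 2, a hedged sketch with two stated assumptions: that the offset figures reported by vxdisk list are 512-byte sectors (so a 1 MiB boundary is 2048 sectors), and that the puboffset attribute of vxdisksetup is honoured for the disk format you end up using. For the cdsdisk format the region layout is largely fixed for cross-platform compatibility, so this may only be possible with the sliced or simple formats. Re-initialising destroys the VxVM contents of the disk, so data would have to be evacuated first:

    # 1 MiB = 1048576 bytes / 512 bytes per sector = 2048 sectors
    vxdisksetup -i emc_clariion0_10 puboffset=2048

    # Confirm where the public and private regions now start
    vxdisk list emc_clariion0_10 | egrep 'public|private'

Worth verifying against the vxdisksetup(1M) man page for your exact 5.x version before relying on it.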

I do not understand the situation that arose
Hi friends, I do not understand a situation that arose. All I/O channels were in the active state. During a SAN switch firmware upgrade, the SAN switch suddenly rebooted. The I/O channels through that SAN switch were disabled, while the other channels stayed in their normal state. But the service was unavailable for about 30 seconds, and I wonder why this happened. I think the service stopped for those 30 seconds while DMP was reconfiguring its paths. Please tell me what other possibilities there are. Thanks.
Operating system: HP-UX
Operating system version: 11.31
Architecture: ia64
Server model: ia64 HP Superdome server SD32B
Veritas version: CFS 5.0 (VxVM 5.0RP6, VxFS 5.0RP6HF1)
Switch: Brocade SilkWorm 48000
Storage: EMC DMX series
syslog messages:
Jul 11 14:41:28 eaiap1p vmunix: 0/0/14/1/0: Fibre Channel Driver received Link Dead Notification.
Jul 11 14:41:28 eaiap1p vmunix:
Jul 11 14:41:28 eaiap1p vmunix: 2/0/14/1/0: Fibre Channel Driver received Link Dead Notification.
Jul 11 14:41:28 eaiap1p vmunix: class : tgtpath, instance 5
Jul 11 14:41:28 eaiap1p vmunix: Target path (class=tgtpath, instance=5) has gone offline. The target path h/w path is 0/0/14/1/0.0x5006048449af3675
Jul 11 14:41:28 eaiap1p vmunix: class : tgtpath, instance 4
Jul 11 14:41:28 eaiap1p vmunix: Target path (class=tgtpath, instance=4) has gone offline. The target path h/w path is 2/0/14/1/0.0x5006048449af3676
....skip
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x1c0 belonging to the dmpnode 5/0xc0
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x1d0 belonging to the dmpnode 5/0x90
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x190 belonging to the dmpnode 5/0x200
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x290 belonging to the dmpnode 5/0x80
Jul 11 14:41:31 eaiap1p vmunix: NOTICE: VxVM vxdmp V-5-0-112 disabled path 255/0x260 belonging to the dmpnode 5/0x1a0
....skip
Jul 11 14:44:57 eaiap1p vmunix: Target path (class=tgtpath, instance=4) has gone online. The target path h/w path is 2/0/14/1/0.0x5006048449af3676
Jul 11 14:45:17 eaiap1p vmunix: class : tgtpath, instance 5
Jul 11 14:44:57 eaiap1p vmunix:
Jul 11 14:45:17 eaiap1p above message repeats 11 times
Jul 11 14:45:17 eaiap1p vmunix: Target path (class=tgtpath, instance=5) has gone online. The target path h/w path is 0/0/14/1/0.0x5006048449af3675
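A short stall while DMP retries and then disables the dead paths is the usual explanation; how long that takes is influenced by the DMP error-recovery settings for the enclosure. A hedged sketch for inspecting and, if appropriate, time-bounding them; the enclosure name emc0 is an example, use vxdmpadm listenclosure all to find yours:

    # Current error-recovery policy for the DMX enclosure
    vxdmpadm listenclosure all
    vxdmpadm getattr enclosure emc0 recoveryoption

    # Example: bound retries on a failed path to 30 seconds instead of a fixed retry count
    vxdmpadm setattr enclosure emc0 recoveryoption=timebound iotimeout=30

    # Path and controller state after the switch reboot
    vxdmpadm listctlr all
    vxdmpadm getsubpaths ctlr=<controller-name>

Note also that the HP-UX fibre channel driver's own link-dead handling sits underneath DMP; in the syslog above the link-dead notifications arrive at 14:41:28 and DMP disables the affected paths three seconds later, so the total stall seen by applications can include both layers.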