How much space is required in a VxFS file system for the recovery file used by the fscdsconv utility
Hi all, I'm going to migrate a VxVM/VxFS (version 5.1SP1RP4) volume/file system from a Solaris SPARC platform to a VxVM/VxFS 6.0.3 on Linux 6.5 platform. The file system is 2 TB in size, with about 1 TB actually used. How much space is required by the fscdsconv utility during conversion? Is there a formula for this job? Thanks a lot, Cecilia
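The CDS documentation does not give a simple closed-form formula: the recovery file holds the metadata that fscdsconv rewrites, so it scales with the number of files rather than with the 2 TB size, and it should live on a file system other than the one being converted. A rough pre-flight free-space check could look like the sketch below; the bytes-per-inode figure and the paths are placeholder assumptions, not documented values.

```shell
#!/bin/sh
# Hypothetical pre-flight check before running fscdsconv: verify that the
# file system holding the recovery file has enough free space.  The recovery
# file stores rewritten metadata, so the estimate is driven by file count.
# EST_BYTES_PER_INODE is a placeholder assumption, NOT a documented value;
# measure it on a test conversion first.
RECOVER_DIR=${RECOVER_DIR:-/tmp}
NFILES=${NFILES:-1000000}                     # rough file count on the source fs
EST_BYTES_PER_INODE=${EST_BYTES_PER_INODE:-1024}

need_kb=`expr $NFILES \* $EST_BYTES_PER_INODE / 1024`
avail_kb=`df -kP $RECOVER_DIR | awk 'NR==2 {print $4}'`

if [ "$avail_kb" -ge "$need_kb" ]; then
    echo "OK: $avail_kb KB free in $RECOVER_DIR, estimated need $need_kb KB"
else
    echo "WARNING: only $avail_kb KB free in $RECOVER_DIR, estimated need $need_kb KB"
fi
```

Run a test conversion on a small clone first and measure the real recovery-file size before trusting any per-file estimate.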
Array Migration using VxVM on Solaris VCS cluster

Hi, We lost our Unix admin a few months ago, who usually administered VxVM for us, and now I'm in a position where I need to migrate volumes between arrays. Unfortunately no documentation of how this was successfully achieved in the past was kept, so I'm looking for some help. I've seen a number of posts that relate to this but am posting a series of questions again as I'm new to Veritas.

The cluster is:
- Solaris 9
- VCS 5.0 and VxVM 5.0 MP1, two-node stretched cluster
- Each node has its own storage array and zoning to the EVA and DMX in each data centre
- QLogic HBAs and native Sun driver
- Current array: HP EVA
- Target array: EMC DMX
- Current SAN: Brocade (HP-badged)
- Target SAN: Brocade

Migration plan (with loads of questions) is:
- EMC PowerPath was installed for multipathing on the DMX a few weeks back
- Freeze the cluster in the main data centre - this node to be used for the migration
- Take the first channel out of the current SAN fabric 1 and plug it into the new SAN fabric 1 in the main data centre on the active, frozen node
- Leave both channels from the standby node in the 2nd data centre in the EVA fabrics for now
- Zone and mask the target LUNs from data centres 1 and 2 on the single HBA in SAN fabric 1
- Discover the LUNs (cfgadm)
- DMX storage is managed by PowerPath, so list devices using powermt display dev=all to map devices to the actual array/LUN
- Initialise each disk in VxVM (vxdisksetup -i emcpower56) - repeat for all new LUNs
- Add the DMX LUNs to the disk groups (vxdg -g testdg adddisk testdgdmx=emcpower56) - repeat for all new LUNs
- Add plexes and mirror (vxassist -g testdg mirror testvol emcpower56)

The existing volumes have two plexes, one from each data centre, each with one subdisk. Will vxassist automatically create the plex, attach it to the volume and start mirroring? Am I OK to repeat this command twice with different objects to get both new mirrors syncing at the same time?
- Check the two new plexes are attached to testvol (vxprint -qthg testdg testvol)
- Check the sync state has completed (vxtask list)
- Disassociate the EVA plex when the sync has completed (vxmend -g testdg off testvol-01; vxplex -g testdg dis testvol-01)
- Delete the EVA plex (vxedit -g testdg -rf rm testvol-01)
- Unmask the EVA storage and clean up using cfgadm on both nodes
- Take the second channel from the active node and plug it into SAN fabric 2
- Rescan using the QLogic driver to pick up the second leg to each LUN
- Verify with powermt display dev=all
- Cable the 2nd node in the second data centre to both new fabrics and scan using the QLogic driver
- Check the 2nd node using powermt display dev=all

Can the VEA GUI be used to carry out the same as the above commands that I've researched? Thanks in advance, Sarah
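The per-volume steps above can be sketched as a dry-run script (disk, group and volume names are the examples from the post; VX=echo prints each command instead of executing it, so remove the $VX prefixes on the real cluster). For what it's worth, vxassist mirror does create the plex, attach it to the volume and start the resync in one step, and mirror operations on different volumes can generally run in parallel.

```shell
#!/bin/sh
# Dry-run sketch of one volume's migration from EVA to DMX.
# VX=echo prints the commands; drop the prefixes to run for real.
VX=${VX:-echo}
DG=testdg
VOL=testvol
NEWDISK=emcpower56
OLDPLEX=testvol-01

$VX vxdisksetup -i $NEWDISK                    # initialise the new DMX LUN
$VX vxdg -g $DG adddisk testdgdmx=$NEWDISK     # add it to the disk group
$VX vxassist -g $DG mirror $VOL $NEWDISK       # creates + attaches a plex and starts the resync
$VX vxtask list                                # watch until the resync completes
$VX vxplex -g $DG dis $OLDPLEX                 # then detach the old EVA plex
$VX vxedit -g $DG -rf rm $OLDPLEX              # and remove it with its subdisks
```

Run it against one non-critical volume first and confirm the plex states with vxprint before repeating across the disk group.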
FS is frequently facing I/O errors

Hi All, I am using Storage Foundation 6.1 with Solaris 10 u10. File systems are frequently facing I/O errors. I am unmounting the FS and mounting it back; it mounts back without any issues, but I would like to know why it is happening regularly. The output below appears when I try to get the FS version. Is it causing the issue, or why is it showing the unrecognized superblock below? Please advise. Thanks in advance.

# fstyp -v /dev/vx/dsk/XXXX/XXXX | grep -i version
Unrecognized Superblock. error = 5052
magic XXXX version 7 ctime Wed Oct 03 20:35:59 2012
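A superblock that fstyp cannot read is worth checking structurally before the next remount. One possible sequence is sketched below as a dry-run (VX=echo prints the commands; the device and mount point are the anonymised names from the post, and the mount point here is a placeholder):

```shell
#!/bin/sh
# Dry-run sketch: verify the VxFS layout and run a read-only full fsck
# while the file system is unmounted.  Remove the $VX prefixes to run for real.
VX=${VX:-echo}
RAWDEV=/dev/vx/rdsk/XXXX/XXXX          # use the rdsk (character) path for fstyp/fsck
MNT=/mount/point                       # placeholder mount point

$VX fstyp -v $RAWDEV                   # query the superblock on the raw device
$VX vxupgrade $MNT                     # with no version argument, reports the disk layout version
$VX fsck -F vxfs -o full -n $RAWDEV    # full structural check, -n = report only, no repairs
```

If the full fsck reports corruption, that (rather than the layout version) is the more likely cause of the recurring I/O errors, and the underlying paths/LUNs deserve a look too.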
Unable to remove a disabled DMP path without reboot on Solaris 10

Here is my problem: I have a DMP node, and one of the two WWNs has been rearranged on the array side, so some disabled paths were generated. My problem is how to remove these disabled paths without disrupting the current VxFS mounts. At the moment, cfgadm sees these paths as failing even though format sees them as offline. luxadm -e offline $path didn't help.
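One commonly suggested sequence is to unconfigure the dead LUN paths at the OS level, prune the stale device nodes, and let VxVM rediscover, which leaves the live paths (and so the mounted file systems) untouched. Sketched as a dry-run below; the attachment point is a placeholder you would take from cfgadm -al output.

```shell
#!/bin/sh
# Dry-run sketch of removing dead DMP paths without a reboot.
# Remove the $VX prefixes to run for real; AP is a placeholder attachment point.
VX=${VX:-echo}
AP=c2                                           # placeholder controller from cfgadm -al

$VX vxdmpadm getsubpaths dmpnodename=dmpnode    # confirm which paths are DISABLED (placeholder node name)
$VX cfgadm -al -o show_SCSI_LUN                 # identify the failing/unusable LUN paths
$VX cfgadm -o unusable_SCSI_LUN -c unconfigure $AP   # drop only the unusable LUNs on that controller
$VX devfsadm -Cv                                # prune stale /dev and /devices entries
$VX vxdisk scandisks                            # let VxVM/DMP rediscover and drop the dead paths
```

Worth verifying afterwards with vxdmpadm getsubpaths that only the enabled paths remain before trusting failover behaviour.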
Recover deleted file

Hello. I have a VxFS file system (VxFS 5.0). Yesterday I accidentally deleted a directory containing about 1000 files. Today I remounted the file system read-only; I want to try to recover those deleted files (if they have not already been overwritten). Is there a tool to do that? Thank you, Matteo
VxVM striped volume extension

Hello, I have the following issue.

sunstation1:/root# vxassist -g DG-DB001 maxsize
Maximum volume size: 326191104 (159273Mb)    <-- showing that a volume could be grown up to 159273 MB
sunstation1:/root# /etc/vx/bin/vxresize -g DG-DB001 VOL-DB001-data1 +50g
VxVM vxassist ERROR V-5-1-436 Cannot allocate space to grow volume to 1498001965 blocks
VxVM vxresize ERROR V-5-1-4703 Problem running vxassist command for volume VOL-DB001-data1, in diskgroup DG-DB001

Is this due to an internal restriction of Volume Manager? Regards, Arup
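One likely explanation: vxassist maxsize reports the free space available to a brand-new volume with a default layout, while growing an existing striped volume needs matching free space in every stripe column, so growth can fail even when maxsize looks ample. The layout-aware question is maxgrow; a dry-run sketch (names are the ones from the post):

```shell
#!/bin/sh
# Dry-run sketch: ask how much THIS volume can grow given its stripe layout,
# then inspect the layout.  Remove the $VX prefixes to run for real.
VX=${VX:-echo}
DG=DG-DB001
VOL=VOL-DB001-data1

$VX vxassist -g $DG maxgrow $VOL     # maximum growth honouring the existing layout
$VX vxprint -g $DG -ht $VOL          # check the number of columns and which disks back them
# If maxgrow reports less than 50g, add disks to the disk group (or grow by
# the reported amount) so every column has somewhere to extend.
```

If maxgrow confirms the shortfall, adding LUNs to DG-DB001 before retrying vxresize is the usual fix.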
Failed to create a Veritas DG

Hi, I'm creating a DG with 6 LUNs of 1 TB each. The disks are as below; when I configure a disk, an error message appears:

emc_clariion0_264 auto - - nolabel
emc_clariion0_265 auto - - nolabel
emc_clariion0_266 auto - - nolabel
emc_clariion0_267 auto - - nolabel
emc_clariion0_268 auto - - nolabel
emc_clariion0_269 auto - - nolabel

root@ # vxdisksetup -i emc_clariion0_264 format=cdsdisk
c5t500601663EE00243d228s2: VxVM vxdisksetup ERROR V-5-2-5241 Cannot label as disk geometry cannot be obtained.
root@ # vxdisksetup -i emc_clariion0_265 format=cdsdisk
c5t5006016F3EE00243d241s2: VxVM vxdisksetup ERROR V-5-2-5241 Cannot label as disk geometry cannot be obtained.
...

root@ # vxdisk list emc_clariion0_264
Device:    emc_clariion0_264
devicetag: emc_clariion0_264
type:      auto
flags:     nolabel private autoconfig
pubpaths:  block=/dev/vx/dmp/emc_clariion0_264 char=/dev/vx/rdmp/emc_clariion0_264
guid:      -
udid:      DGC%5FVRAID%5FCKM00123600618%5F6006016025303100932DF87904C4E411
site:      -
errno:     Device path not valid
Multipathing information:
numpaths:  8
c7t5006016E3EE00243d228s2 state=enabled type=secondary
c7t500601673EE00243d228s2 state=enabled type=primary
c5t5006016F3EE00243d228s2 state=enabled type=secondary
c5t500601663EE00243d228s2 state=enabled type=primary
c6t500601663EE00243d228s2 state=enabled type=primary
c6t5006016F3EE00243d228s2 state=enabled type=secondary
c8t5006016E3EE00243d228s2 state=enabled type=secondary
c8t500601673EE00243d228s2 state=enabled type=primary
root@BMG01 #

Can someone help me solve the problem? Thanks, Marconi
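Two things may be worth checking here, sketched below as a dry-run. "Device path not valid" often points at stale OS device paths that a rescan can clear, and at 1 TB these LUNs sit right at the limit of a Solaris SMI (VTOC) label, so an EFI label (format -e) may be needed before vxdisksetup can obtain a geometry. Treat the EFI step as a guess to verify against your VxVM version's support for EFI-labelled cdsdisks.

```shell
#!/bin/sh
# Dry-run sketch: refresh device discovery, then label and initialise one LUN.
# Remove the $VX prefixes to run for real; the disk name is from the post.
VX=${VX:-echo}
DISK=emc_clariion0_264

$VX devfsadm -Cv                               # clean up stale device paths
$VX vxdisk scandisks                           # re-discover devices in VxVM
$VX format -e                                  # interactive: select the LUN and write an EFI label
$VX vxdisksetup -i $DISK format=cdsdisk        # retry the initialisation
```

If the retry still fails, checking the array-side presentation (read/write state, host mode) with the storage team would be the next step.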
Migrating to a new SAN

We're in the process of moving to new datacenters. All servers will be moved over but the SAN won't: the SAN will be replicated to a new SAN in the new datacenters by our SAN admins. That means the LUNs in the new SAN will be identical to the old ones, including the LUN numbers. Example output from 'vxdisk list' on the current host shows:

c1t50050768014052E2d129s2 auto:cdsdisk somedisk01 somedg online

This disk will get the same LUN number, but the target name will probably differ as it's new hardware in the new DC. Will this work with Veritas SFHA? If not, is there any way to do this the way the datacenter project wants to? If I could choose to do this my way, I would present the LUNs on the new SAN to my servers so that I could do a normal host-based migration: add the new LUN to the disk group and mirror the data, then remove the old LUN. However, I'm told that the hosts in the current datacenter won't be able to see the new SAN.
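For reference, VxVM identifies disks and disk groups by IDs stamped in each disk's private region, not by the cXtXdX device name, so a changed target name by itself should not block the import of block-level-replicated LUNs. A dry-run sketch of the first checks on a host attached to the new SAN (the disk group name is the example from the post; use -C to clear the old host ID only once the original host is definitely detached):

```shell
#!/bin/sh
# Dry-run sketch of importing a replicated disk group on new hardware.
# Remove the $VX prefixes to run for real.
VX=${VX:-echo}
DG=somedg

$VX vxdctl enable           # rescan devices on the new host
$VX vxdisk -o alldgs list   # replicated disks should show the dg name in parentheses
$VX vxdg -C import $DG      # -C clears the stale host ID written by the old host
$VX vxvol -g $DG startall   # start the volumes, then fsck/mount as usual
```

The main caveat is replication consistency: the LUNs of a disk group must be replicated as a crash-consistent set, otherwise the import can succeed but the file systems on top may need recovery.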
Issue while extending a file system in VxVM

Hi, I have a case as below: please extend the file systems below from 30G to 50G on server island.

/dev/vx/dsk/VRP_PRD_ora_data_dg/sapdata2 30G 15G 14G 52% /oracle/VRP/sapdata2
/dev/vx/dsk/VRP_PRD_ora_data_dg/sapdata4 30G 15G 14G 51% /oracle/VRP/sapdata4
/dev/vx/dsk/VRP_PRD_ora_data_dg/sapdata1 30G 14G 15G 49% /oracle/VRP/sapdata1
/dev/vx/dsk/VRP_PRD_ora_data_dg/sapdata3 30G 27G 2.7G 91% /oracle/VRP/sapdata3

root@myers# vxdisk -o alldgs -e list
DEVICE       TYPE  DISK                  GROUP                STATUS  OS_NATIVE_NAME ATTR
emcpower22s2 auto  VRP_PRD_sapcent_dg01  VRP_PRD_sapcent_dg   online  emcpower22c    -
emcpower23s2 auto  emcpower23s2          VRP_PRD_ora_data_dg  online  emcpower23c    -
emcpower24s2 auto  emcpower24s2          VRP_PRD_ora_data_dg  online  emcpower24c    -
emcpower25s2 auto  emcpower25s2          VRP_PRD_ora_data_dg  online  emcpower25c    -
emcpower26s2 auto  emcpower26s2          VRP_PRD_ora_data_dg  online  emcpower26c    -
emcpower27s2 auto  emcpower27s2          VRP_PRD_ora_bin_dg   online  emcpower27c    -
emcpower28s2 auto  VRP_PRD_ora_arch_dg01 VRP_PRD_ora_arch_dg  online  emcpower28c    -

These are the 4 new LUNs of 30 GB assigned by Storage:

57. emcpower29e <EMC-SYMMETRIX-5874 cyl 29158 alt 2 hd 15 sec 128> /pseudo/emcp@29
58. emcpower30a <EMC-SYMMETRIX-5874 cyl 29158 alt 2 hd 15 sec 128> /pseudo/emcp@30
59. emcpower33a <EMC-SYMMETRIX-5874 cyl 29158 alt 2 hd 15 sec 128> /pseudo/emcp@33
60. emcpower34b <EMC-SYMMETRIX-5874 cyl 29158 alt 2 hd 15 sec 128> /pseudo/emcp@34

When I tried to label them they gave errors:

Specify disk (enter its number): 57
selecting emcpower29e
[disk formatted]
Disk not labeled. Label it now? y
Warning: error writing VTOC.
Warning: no backup labels
Write label failed

FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show vendor, product and revision
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return

The four LUNs emcpower29e, emcpower30a, emcpower33a and emcpower34b need to be initialized and added to the VRP_PRD_ora_data_dg disk group so each file system can grow by 20 GB more. Shall I proceed with the disk initialization in VxVM without labeling, or please help me step by step with the activities required for this. Thanks,
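Assuming the label problem is fixed first (vxdisksetup on SPARC needs readable disk geometry, so initializing without a label is unlikely to work; "Write label failed" usually means the array presents the LUN read-only or the masking is wrong), the whole extension could be sketched as a dry-run like this. The VxVM device names without the slice letter, and growing sapdata3 first, are assumptions for illustration.

```shell
#!/bin/sh
# Dry-run sketch of the full extension with the names from the post.
# Remove the $VX prefixes to run for real.
VX=${VX:-echo}
DG=VRP_PRD_ora_data_dg

for d in emcpower29 emcpower30 emcpower33 emcpower34; do
    $VX vxdisksetup -i $d format=cdsdisk   # needs a successfully labelled disk underneath
    $VX vxdg -g $DG adddisk $d             # grow the disk group with the new LUN
done

# vxresize grows the volume and its VxFS file system online in one step;
# repeat for sapdata1, sapdata2 and sapdata4.
$VX /etc/vx/bin/vxresize -g $DG sapdata3 50g
```

Growing the 91%-full sapdata3 first keeps the most urgent file system safe while the remaining three are extended.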