How much space is required in a VxFS file system for the recovery file used by the fscdsconv utility
Hi all, I'm going to migrate a VxVM/VxFS (version 5.1SP1RP4) volume and file system from a Solaris SPARC platform to VxVM/VxFS 6.0.3 on a Linux 6.5 platform. The file system is 2 TB in size, with about 1 TB actually in use. How much space does the fscdsconv utility require during the conversion? Is there a formula for this job?

Thanks a lot,
Cecilia
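There is no firm figure in this post, but as a rough sketch of the conversion step Cecilia is asking about (the volume, mount point and recovery-file path below are placeholders, and the exact options should be checked against the fscdsconv man page on your release):

# the file system must be unmounted before conversion, and the recovery
# file should live on a file system other than the one being converted
umount /prod01
fscdsconv -f /var/tmp/prodvol.recovery /dev/vx/dsk/proddg/prodvol

The recovery file is only needed when metadata actually has to be byte-swapped (as it does going from SPARC to Linux), so in practice its size tracks the amount of metadata in the file system rather than the raw 2 TB capacity; treat that as a working assumption rather than a documented formula.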
Array Migration using VxVM on Solaris VCS cluster
Hi, we lost our Unix admin a few months ago, who usually administered VxVM for us, and now I'm in a position where I need to migrate volumes between arrays. Unfortunately no documentation of how this was successfully achieved in the past was kept, so I'm looking for some help. I've seen a number of posts that relate to this but am posting a series of questions again as I'm new to Veritas.

The cluster is:
- Solaris 9
- VCS 5.0 and VxVM 5.0 MP1, two-node stretched cluster
- Each node has its own storage array and zoning to the EVA and DMX in each data centre
- QLogic HBAs and native Sun driver
- Current array: HP EVA
- Target array: EMC DMX
- Current SAN: Brocade (HP-badged)
- Target SAN: Brocade

Migration plan (with loads of questions) is:
- EMC PowerPath was installed for multipathing on the DMX a few weeks back
- Freeze the cluster in the main data centre - this node to be used for the migration
- Take the first channel out of current SAN fabric 1 and plug it into new SAN fabric 1 in the main data centre on the active, frozen node
- Leave both channels from the standby node in the 2nd data centre on the EVA fabrics for now
- Zone and mask target LUNs from data centre 1 and 2 on the single HBA in SAN fabric 1
- Discover LUNs (cfgadm)
- DMX storage is managed by PowerPath, so list devices using powermt display dev=all to map devices to the actual array/LUN
- Initialise the disks in VxVM (vxdisksetup -i emcpower56) - repeat for all new LUNs
- Add the DMX LUNs to the disk groups (vxdg -g testdg adddisk testdgdmx=emcpower56) - repeat for all new LUNs
- Add plexes and mirror (vxassist -g testdg mirror testvol emcpower56)

The existing volumes have two plexes, one from each data centre, each with one subdisk. Will vxassist automatically create the plex, attach it to the volume and start mirroring? Am I OK to repeat this command twice with different objects to get both new mirrors syncing at the same time? (See the sketch after this post.)

- Check that the two new plexes are attached to testvol (vxprint -qthg testdg testvol)
- Check that the sync has completed (vxtask list)
- Disassociate the EVA plex when the sync has completed (vxmend -g testdg off testvol-01; vxplex -g testdg dis testvol-01)
- Delete the EVA plex (vxedit -g testdg -rf rm testvol-01)
- Unmask the EVA storage and clean up using cfgadm on both nodes
- Take the second channel from the active node and plug it into SAN fabric 2
- Rescan using the QLogic driver to pick up the second leg to each LUN
- Verify with powermt display dev=all
- Cable the 2nd node in the second data centre to both new fabrics and scan using the QLogic driver
- Check the 2nd node using powermt display dev=all

Can the VEA GUI be used to carry out the same as the above commands that I've researched?

Thanks in advance,
Sarah
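On the mirroring question, a minimal sketch of how that step can be run (names are taken from the post except emcpower57, which is a hypothetical second LUN; verify the -b background flag against your vxassist man page):

# vxassist mirror creates the plex, attaches it to the volume and starts
# the resync in one step; -b returns immediately so a second mirror
# operation can run in parallel
vxassist -b -g testdg mirror testvol emcpower56
vxassist -b -g testdg mirror testvol emcpower57

# watch the resync tasks and confirm the new plexes show ACTIVE
vxtask list
vxprint -qthg testdg testvol

Running two resyncs at once is mainly a question of how much I/O the arrays and inter-site links can absorb, not a VxVM limitation.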
Migrating to a new SAN
We're in the process of moving to new datacenters. All servers will be moved over, but the SAN won't. The SAN will be replicated to a new SAN in the new datacenters by our SAN admins. That means the LUNs on the new SAN will be identical to the old ones, including the LUN numbers. Example output from 'vxdisk list' on the current host shows:

c1t50050768014052E2d129s2 auto:cdsdisk somedisk01 somedg online

This disk will get the same LUN number, but the target name will probably differ as it's new hardware in the new DC. Will this work with Veritas SFHA? If not, is there any way to do this the way the datacenter project wants to?

If I could choose to do this my way, I would present the LUNs on the new SAN to my servers so that I could do a normal host-based migration: add the new LUN to the disk group and mirror the data, then remove the old LUN. However, I'm told that the hosts in the current datacenter won't be able to see the new SAN.
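Not a definitive answer, but VxVM identifies disks by the disk IDs stored in the private region rather than by the OS controller/target names, so a change of target name by itself should not matter. A hedged sketch of the checks after cutover (somedg comes from the post; the volume and mount point are placeholders):

# rescan devices and rebuild the VxVM device list on the new site
vxdctl enable
vxdisk -o alldgs list        # replicated LUNs should show their old DG name

# import the disk group; -C clears a stale import lock that may have been
# replicated along with the private region
vxdg -C import somedg
mount -F vxfs /dev/vx/dsk/somedg/somevol /somemount

Whether the replicated copies are crash-consistent, and whether the VCS DiskGroup resources need re-probing afterwards, are separate questions for the SAN and cluster admins.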
Veritas Storage Foundation 5.0 - Error code
Dear All, we have Veritas Storage Foundation 5.0 HA with VVR, VCS, VxVM and VxFS. We would like to find a guide to all the error codes of the above-mentioned products, as there is one available for NetBackup. Please reply with a link to the said guide.

Regards,
Dharmesh
How to expand a dg "vgbackup" and afterwards a filesystem "/prod01p"
I have a dg with 44 LUNs (100 GB each) and I'm allocating 13 more LUNs for the expansion, ending up with 57 LUNs. Currently the dg has a 44-column striped volume:

v  vol_prod01p     -            ENABLED  ACTIVE  9219080192  SELECT  vol_prod01p-01  fsgen
pl vol_prod01p-01  vol_prod01p  ENABLED  ACTIVE  9219082752  STRIPE  44/128  RW

I now add the new disks, as follows:

vxdg -g vgbackup adddisk vgbackup250=emc_clariion0_250
vxdg -g vgbackup adddisk vgbackup253=emc_clariion0_253
vxdg -g vgbackup adddisk vgbackup255=emc_clariion0_255
vxdg -g vgbackup adddisk vgbackup257=emc_clariion0_257
vxdg -g vgbackup adddisk vgbackup264=emc_clariion0_264
vxdg -g vgbackup adddisk vgbackup265=emc_clariion0_265
vxdg -g vgbackup adddisk vgbackup266=emc_clariion0_266
vxdg -g vgbackup adddisk vgbackup267=emc_clariion0_267
vxdg -g vgbackup adddisk vgbackup268=emc_clariion0_268
vxdg -g vgbackup adddisk vgbackup269=emc_clariion0_269
vxdg -g vgbackup adddisk vgbackup271=emc_clariion0_271
vxdg -g vgbackup adddisk vgbackup272=emc_clariion0_272
vxdg -g vgbackup adddisk vgbackup273=emc_clariion0_273

From this step, which commands should I use to complete the expansion of the dg and subsequently grow the file system onto these new disks?

Information:

/dev/vx/dsk/vgbackup/vol_prod01p  4.3T  4.0T  307G  94%  /prod01p

root@BMG01 # vxassist -g vgbackup maxsize
Maximum volume size: 2730465280 (1333235Mb)

Thanks.
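A minimal sketch of the usual grow step, assuming the 44-column striped layout can actually extend across the new space (the +1t figure is illustrative; substitute the size you need, up to the maxsize/maxgrow figure):

# check how far the volume can now grow with the added disks
vxassist -g vgbackup maxgrow vol_prod01p

# grow the volume and the mounted VxFS file system in one operation
/etc/vx/bin/vxresize -g vgbackup -F vxfs vol_prod01p +1t

Note that a striped plex has to be extended evenly across its columns, so with 44 columns and only 13 new disks the usable growth may be limited unless the allocation can spread across all columns; check the maxgrow output before committing to a size.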
How to break root disk Mirror in VxVM
Hi All,

bash-3.00# vxprint -g rootdg -vh rootvol
TY NAME         ASSOC       KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
v  rootvol      root        ENABLED  60821952  -       ACTIVE  -       -
pl rootvol-01   rootvol     ENABLED  60821952  -       ACTIVE  -       -
sd rootdg01-B0  rootvol-01  ENABLED  1         0       -       -       Block0
sd rootdg01-02  rootvol-01  ENABLED  60821951  1       -       -       -
pl rootvol-02   rootvol     ENABLED  60821952  -       ACTIVE  -       -
sd rootdg02-01  rootvol-02  ENABLED  60821952  0       -       -       -

bash-3.00# df -h /
Filesystem                  size  used  avail  capacity  Mounted on
/dev/vx/dsk/bootdg/rootvol  29G   19G   9.4G   67%       /

1) From the above configuration we see the root file system is configured on volume rootvol, which is a mirror. I'd now like to break the mirror and keep the mirror copy for backout purposes, as I will be upgrading on the actual root disk. I do not want to delete the plexes or the mirror copy. In SVM, if d0 is a mirror and d10 and d20 are its submirrors, we issue metadetach d0 d20 to detach the submirror. How do we accomplish the same in the above VxVM configuration? (A sketch of one approach follows this post.)

2) Plex rootvol-02 has only 1 subdisk, rootdg02-01, whereas plex rootvol-01 has 2 subdisks, rootdg01-B0 and rootdg01-02. What is the significance of having 2 subdisks for plex rootvol-01? If plex rootvol-01 is a mirrored copy of plex rootvol-02, should the size and number of subdisks in each plex be the same or not?

=====================================================================================================

use-nvramrc?=true
nvramrc=devalias vx-rootdg01 /pci@1f,700000/scsi@2/disk@0,0:a
devalias vx-rootdg02 /pci@1f,700000/scsi@2/disk@1,0:a

3) Once the root volume plex has been disassociated, can we still use both of the above listed device aliases to boot the OS from the ok prompt?

ok> boot vx-rootdg01
ok> boot vx-rootdg02

Thank you everybody for your responses as always; they are highly appreciated.

Regards,
Danish.
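On question 1, a sketch of what I'd reach for as the closest analogue to metadetach (plex names are taken from the vxprint output above; test the procedure somewhere non-critical first):

# detach the second plex; it stays associated with rootvol but stops
# receiving writes, so it is preserved as a backout copy
vxplex -g rootdg det rootvol-02

# after a successful upgrade, reattach it and let it resynchronise
# from the upgraded copy
vxplex -g rootdg att rootvol rootvol-02

If the backout plan is instead to boot from the detached copy, that is a different procedure (the detached plex has to be made bootable in its own right), so treat the above only as the detach/reattach half of the job.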
Flag status unknown
Hi All, I am trying to configure NFS under VCS and I'm getting a "Flag status unknown" error for the NFSRestart service. Can you please suggest a solution to this problem? Attached is a screenshot of the VCS console and the main.cf configuration file.

Thanks!
Ravinder
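A sketch of the first diagnostics usually run when a resource reports an unknown state (the resource and system names below are placeholders; the attributes to check depend on how NFSRestart is configured in your main.cf):

# overall view of groups and resources
hastatus -sum

# show the attribute values VCS is using for the suspect resource;
# an UNKNOWN state is very often a missing or mistyped attribute
hares -display nfsrestart_res

# once the attribute is corrected, re-probe the resource on the node
hares -probe nfsrestart_res -sys node1

The engine log (/var/VRTSvcs/log/engine_A.log) usually names the exact attribute or prerequisite the agent is complaining about.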