How much space is required in a VxFS file system for the recovery file used by the fscdsconv utility
Hi all, I'm going to migrate a VxVM/VxFS (version 5.1SP1RP4) volume and file system from a Solaris SPARC platform to a VxVM/VxFS 6.0.3 platform on Linux 6.5. The file system size is 2 TB, with about 1 TB actually in use. How much space does the fscdsconv utility require during conversion? Is there a formula for this job? Thanks a lot, Cecilia

Array Migration using VxVM on Solaris VCS cluster
Hi, we lost our Unix admin a few months ago who usually administered VxVM for us, and now I'm in a position where I need to migrate volumes between arrays. Unfortunately, no documentation of how this was successfully achieved in the past was kept, so I'm looking for some help. I've seen a number of posts that relate to this but am posting a series of questions again as I'm new to Veritas.

The cluster is:
- Solaris 9
- VCS 5.0 and VxVM 5.0 MP1, two-node stretched cluster
- Each node has its own storage array and zoning to the EVA and DMX in each data centre
- QLogic HBAs and native Sun driver
- Current array: HP EVA
- Target array: EMC DMX
- Current SAN: Brocade (HP-badged)
- Target SAN: Brocade

Migration plan (with loads of questions) is:
- EMC PowerPath was installed for multipathing on the DMX a few weeks back
- Freeze the cluster in the main data centre; this node to be used for migration
- Take the first channel out of the current SAN fabric 1 and plug it into the new SAN fabric 1 in the main data centre on the active, frozen node
- Leave both channels from the standby node in the second data centre in the EVA fabrics for now
- Zone and mask the target LUNs from data centres 1 and 2 on the single HBA in SAN fabric 1
- Discover the LUNs (cfgadm)
- The DMX storage is managed by PowerPath, so list devices using "powermt display dev=all" to map devices to the actual array/LUN
- Initialise each disk in VxVM (vxdisksetup -i emcpower56); repeat for all new LUNs
- Add the DMX LUNs to the disk groups (vxdg -g testdg adddisk testdgdmx=emcpower56); repeat for all new LUNs
- Add plexes and mirror (vxassist -g testdg mirror testvol emcpower56)

The existing volumes have two plexes, one from each data centre, each with one subdisk. Will vxassist automatically create the plex, attach it to the volume and start mirroring? Am I OK to repeat this command twice with different objects to get both new mirrors syncing at the same time?
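The per-LUN steps above can be sketched as a small loop. This is a minimal sketch that only prints the commands for review rather than running them; the LUN names (emcpower56/57) and the disk-media naming scheme are placeholder assumptions, so adjust both to the real PowerPath devices before use:

```shell
#!/bin/sh
# Sketch only: prints each per-LUN VxVM command instead of executing it,
# so the whole sequence can be reviewed before running on the cluster.
print_mirror_cmds() {
    dg=$1; vol=$2; shift 2
    for lun in "$@"; do
        echo "vxdisksetup -i $lun"                    # initialise disk for VxVM use
        echo "vxdg -g $dg adddisk ${dg}_${lun}=$lun"  # add to the disk group (media name is a placeholder)
    done
    # vxassist mirror creates the new plex, attaches it to the volume and
    # starts the resync in one step; one invocation per new mirror.
    for lun in "$@"; do
        echo "vxassist -g $dg mirror $vol $lun"
    done
}

print_mirror_cmds testdg testvol emcpower56 emcpower57
```

Running two vxassist mirror commands against different volumes does start both resyncs concurrently; whether that is wise depends on the array and host I/O headroom.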
- Check the two new plexes are attached to testvol (vxprint -qthg testdg testvol)
- Check the sync state has completed (vxtask list)
- Disassociate the EVA plex when the sync has completed (vxmend -g testdg off testvol-01; vxplex -g testdg dis testvol-01)
- Delete the EVA plex (vxedit -g testdg -rf rm testvol-01)
- Unmask the EVA storage and clean up using cfgadm on both nodes
- Take the second channel from the active node and plug it into SAN fabric 2
- Rescan using the QLogic driver to pick up the second leg to each LUN
- Verify with powermt display dev=all
- Cable the second node in the second data centre to both new fabrics and scan using the QLogic driver
- Check the second node using powermt display dev=all

Can the VEA GUI be used to carry out the same as the above commands that I've researched? Thanks in advance, Sarah

would vxvm mirroring activities take lower priority than normal write activity before it is synced up?
Hi, I have a question here, hoping someone could clarify. Does a VxVM mirror resync (before the mirror is synced up) take lower priority than normal data writes? I am asking because if attaching a mirror would slow down normal data I/O, it would make sense to give the resync writes lower priority than normal I/O, I would think. Thanks in advance.

unable to remove a disabled dmp path without reboot on solaris 10
Here is my problem: I have a DMP node, and one of its two WWNs has been rearranged from the array side, so some disabled paths were generated. My problem is how to remove these disabled paths without disrupting the current VxFS mounts. At the moment, cfgadm sees these paths as failing even though format sees them as offline. "luxadm -e offline $path" didn't help.

Troubleshooting volume performance in Storage Foundation
Hi, all. We have released a set of articles about troubleshooting volume performance in Storage Foundation. Here is the link: "Troubleshooting volume performance in Veritas Storage Foundation" http://www.symantec.com/docs/TECH202712 Since this is a broad topic, the "technote" is actually a set of about a dozen articles that have been organized into a logical tree structure, with TECH202712 at its "root." Let us know what you think! Regards, Mike

how can a list of mount points be displayed using cfsmntadm
Hi All, can anyone help me? I have four servers already configured with Veritas cluster, and these servers are already in production. I want to display the list of mount points using the command "cfsmntadm display". Based on my knowledge this could affect the configuration (correct me if I'm wrong). I'm using RHEL 5 and SF 6. Thanks

Is the volume relayout command possible?
Hello, I need your help. I wonder whether the relayout command will work for this change (stripe layout, ncol=14, to stripe layout, ncol=7). I will use the following command:

# vxassist -g ccrmapvg01 relayout lvol01 layout=stripe ncol=7

Is it possible? If you need anything, let me know.

Environment information:
OS: HP-UX 11.31
SFCFS version: SFCFS 5.0RP6

============== vxdg list ============
NAME        STATE               ID
ccrmapvg11  enabled,shared,cds  1139668239.157.ccrmap1p

============== vxprint ============
dg ccrmapvg11 default default 49000 1139668239.157.ccrmap1p
dm c35t0d4 c38t0d4 auto 1024 47589888 -
dm c35t0d5 c38t0d5 auto 1024 47589888 -
dm c35t0d6 c38t0d6 auto 1024 47589888 -
dm c35t0d7 c38t0d7 auto 1024 47589888 -
dm c35t1d0 c38t1d0 auto 1024 47589888 -
dm c35t1d1 c38t1d1 auto 1024 47589888 -
dm c35t1d2 c38t1d2 auto 1024 47589888 -
dm c35t1d3 c38t1d3 auto 1024 47589888 -
dm c35t1d4 c38t1d4 auto 1024 47589888 -
dm c35t1d5 c38t1d5 auto 1024 47589888 -
dm c35t1d6 c38t1d6 auto 1024 47589888 -
dm c35t1d7 c38t1d7 auto 1024 47589888 -
dm c35t2d0 c38t2d0 auto 1024 47589888 -
dm c35t2d1 c38t2d1 auto 1024 47589888 -
dm c35t2d2 c38t2d2 auto 1024 47589888 -
dm c35t2d3 c38t2d3 auto 1024 47589888 -
dm c35t2d4 c38t2d4 auto 1024 47589888 -
dm c35t2d5 c38t2d5 auto 1024 47589888 -
dm c35t2d6 c38t2d6 auto 1024 47589888 -
dm c35t2d7 c38t2d7 auto 1024 47589888 -
dm c35t3d0 c38t3d0 auto 1024 47589888 -
v  lvol01     -         ENABLED ACTIVE 666257408 SELECT lvol01-01 fsgen
pl lvol01-01  lvol01    ENABLED ACTIVE 666257536 STRIPE 14/64 RW
sd c35t0d4-01 lvol01-01 c35t0d4 0 47589824 0/0  c38t0d4 ENA
sd c35t0d5-01 lvol01-01 c35t0d5 0 47589824 1/0  c38t0d5 ENA
sd c35t0d6-01 lvol01-01 c35t0d6 0 47589824 2/0  c38t0d6 ENA
sd c35t0d7-01 lvol01-01 c35t0d7 0 47589824 3/0  c38t0d7 ENA
sd c35t1d0-01 lvol01-01 c35t1d0 0 47589824 4/0  c38t1d0 ENA
sd c35t1d1-01 lvol01-01 c35t1d1 0 47589824 5/0  c38t1d1 ENA
sd c35t1d2-01 lvol01-01 c35t1d2 0 47589824 6/0  c38t1d2 ENA
sd c35t1d3-01 lvol01-01 c35t1d3 0 47589824 7/0  c38t1d3 ENA
sd c35t1d4-01 lvol01-01 c35t1d4 0 47589824 8/0  c38t1d4 ENA
sd c35t1d5-01 lvol01-01 c35t1d5 0 47589824 9/0  c38t1d5 ENA
sd c35t1d6-01 lvol01-01 c35t1d6 0 47589824 10/0 c38t1d6 ENA
sd c35t1d7-01 lvol01-01 c35t1d7 0 47589824 11/0 c38t1d7 ENA
sd c35t2d0-01 lvol01-01 c35t2d0 0 47589824 12/0 c38t2d0 ENA
sd c35t2d1-01 lvol01-01 c35t2d1 0 47589824 13/0 c38t2d1 ENA

============== vxdg free ============
DISK    DEVICE  TAG     OFFSET   LENGTH   FLAGS
c35t0d4 c38t0d4 c38t0d4 47589824 64       -
c35t0d5 c38t0d5 c38t0d5 47589824 64       -
c35t0d6 c38t0d6 c38t0d6 47589824 64       -
c35t0d7 c38t0d7 c38t0d7 47589824 64       -
c35t1d0 c38t1d0 c38t1d0 47589824 64       -
c35t1d1 c38t1d1 c38t1d1 47589824 64       -
c35t1d2 c38t1d2 c38t1d2 47589824 64       -
c35t1d3 c38t1d3 c38t1d3 47589824 64       -
c35t1d4 c38t1d4 c38t1d4 47589824 64       -
c35t1d5 c38t1d5 c38t1d5 47589824 64       -
c35t1d6 c38t1d6 c38t1d6 47589824 64       -
c35t1d7 c38t1d7 c38t1d7 47589824 64       -
c35t2d0 c38t2d0 c38t2d0 47589824 64       -
c35t2d1 c38t2d1 c38t2d1 47589824 64       -
c35t2d2 c38t2d2 c38t2d2 0        47589888 -
c35t2d3 c38t2d3 c38t2d3 0        47589888 -
c35t2d4 c38t2d4 c38t2d4 0        47589888 -
c35t2d5 c38t2d5 c38t2d5 0        47589888 -
c35t2d6 c38t2d6 c38t2d6 0        47589888 -
c35t2d7 c38t2d7 c38t2d7 0        47589888 -
c35t3d0 c38t3d0 c38t3d0 0        47589888 -

============== bdf2 ============
/dev/vx/dsk/ccrmapvg11/lvol01 666257408 554988262 104314825 84% /nbsftp4

Migrating VxVM volumes from a V240 to a T4-2 Ldom
Hi all, I have a V240 running Solaris 10U08 with VxVM 6.0.000 that I need to migrate to a Solaris T4-2 LDom (LDM 3.1). I have loaded the same software as on the source onto the LDom and presented the required SAN LUNs that match the source into the guest LDom, but when I try to set up a disk (LUN) in VxVM using vxdisksetup I get: "ERROR V-5-2-43 c0d3: Invalid disk device for vxdisksetup". I have tried using "vxdisk init c0d3s2" and get: "Disk sector size is not supported", and tried "vxdisk init c0d3s2 format=sliced" and get: "Disk VTOC does not list private partition". Don't know what else to try!

how to delsec rvg when the primary is unknown?
I am on the secondary of an RVG, but the primary is not connected. How can I remove the RVG objects? I tried "vradmin -g $dg -f delsec $rvg", but it gave me this error: VxVM VVR vradmin ERROR V-5-52-449 Secondary $rvg does not have an active Primary.

Learning Storage Foundation
Hi All, I am interested in learning Storage Foundation for Unix/Solaris and Windows. I have discussed it with my colleagues at work to get some understanding of the product itself and have seen some videos too. Can someone guide me on setting up a lab for Storage Foundation? My understanding is that Storage Foundation is a suite of products which includes:
- Veritas File System
- Veritas Volume Manager
- Veritas Volume Replicator
What is Storage Foundation HA? Is it a separate product, or the suite of the above products plus HA (clustering) functionality? Also, any tips on learning this product and playing around with it in a test setup would be really helpful. Thanks
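For the hands-on side, the core of the suite can be exercised in a few commands once SF is installed on a lab host. This is a minimal sketch for a Solaris lab box; disk_0, labdg, labvol and /labfs are placeholder names, and the disk must be a spare device that VxVM can see:

```shell
#!/bin/sh
# Minimal SF lab exercise (sketch, placeholder names): initialise a spare
# disk, create a disk group and volume, then put a VxFS file system on it.
vxdisksetup -i disk_0                          # bring the disk under VxVM control
vxdg init labdg labdg01=disk_0                 # create a disk group containing it
vxassist -g labdg make labvol 1g               # carve a 1 GB volume from the group
mkfs -F vxfs /dev/vx/rdsk/labdg/labvol         # make a VxFS file system on the raw volume
mkdir -p /labfs
mount -F vxfs /dev/vx/dsk/labdg/labvol /labfs  # mount it via the block device
vxprint -g labdg                               # inspect the resulting VxVM objects
```

Reversing the steps (umount, vxassist remove, vxdg destroy, vxdiskunsetup) is an equally useful exercise; Volume Replicator needs a second host, so leave it until the single-node basics are comfortable.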