Is a volume relayout command possible?
Hello, I need your help. I would like to know whether a relayout of this volume is possible: I want to change a stripe layout with ncol=14 to a stripe layout with ncol=7, using the following command:

# vxassist -g ccrmapvg01 relayout lvol01 layout=stripe ncol=7

Is it possible? If you need anything else, let me know.

Environment information:
OS: HP-UX 11.31
SFCFS version: SFCFS 5.0 RP6

============== vxdg list ============
NAME         STATE                ID
ccrmapvg11   enabled,shared,cds   1139668239.157.ccrmap1p

============== vxprint ============
dg ccrmapvg11   default      default   49000      1139668239.157.ccrmap1p
dm c35t0d4      c38t0d4      auto      1024       47589888   -
dm c35t0d5      c38t0d5      auto      1024       47589888   -
dm c35t0d6      c38t0d6      auto      1024       47589888   -
dm c35t0d7      c38t0d7      auto      1024       47589888   -
dm c35t1d0      c38t1d0      auto      1024       47589888   -
dm c35t1d1      c38t1d1      auto      1024       47589888   -
dm c35t1d2      c38t1d2      auto      1024       47589888   -
dm c35t1d3      c38t1d3      auto      1024       47589888   -
dm c35t1d4      c38t1d4      auto      1024       47589888   -
dm c35t1d5      c38t1d5      auto      1024       47589888   -
dm c35t1d6      c38t1d6      auto      1024       47589888   -
dm c35t1d7      c38t1d7      auto      1024       47589888   -
dm c35t2d0      c38t2d0      auto      1024       47589888   -
dm c35t2d1      c38t2d1      auto      1024       47589888   -
dm c35t2d2      c38t2d2      auto      1024       47589888   -
dm c35t2d3      c38t2d3      auto      1024       47589888   -
dm c35t2d4      c38t2d4      auto      1024       47589888   -
dm c35t2d5      c38t2d5      auto      1024       47589888   -
dm c35t2d6      c38t2d6      auto      1024       47589888   -
dm c35t2d7      c38t2d7      auto      1024       47589888   -
dm c35t3d0      c38t3d0      auto      1024       47589888   -
v  lvol01       -            ENABLED   ACTIVE     666257408  SELECT   lvol01-01  fsgen
pl lvol01-01    lvol01       ENABLED   ACTIVE     666257536  STRIPE   14/64      RW
sd c35t0d4-01   lvol01-01    c35t0d4   0          47589824   0/0      c38t0d4    ENA
sd c35t0d5-01   lvol01-01    c35t0d5   0          47589824   1/0      c38t0d5    ENA
sd c35t0d6-01   lvol01-01    c35t0d6   0          47589824   2/0      c38t0d6    ENA
sd c35t0d7-01   lvol01-01    c35t0d7   0          47589824   3/0      c38t0d7    ENA
sd c35t1d0-01   lvol01-01    c35t1d0   0          47589824   4/0      c38t1d0    ENA
sd c35t1d1-01   lvol01-01    c35t1d1   0          47589824   5/0      c38t1d1    ENA
sd c35t1d2-01   lvol01-01    c35t1d2   0          47589824   6/0      c38t1d2    ENA
sd c35t1d3-01   lvol01-01    c35t1d3   0          47589824   7/0      c38t1d3    ENA
sd c35t1d4-01   lvol01-01    c35t1d4   0          47589824   8/0      c38t1d4    ENA
sd c35t1d5-01   lvol01-01    c35t1d5   0          47589824   9/0      c38t1d5    ENA
sd c35t1d6-01   lvol01-01    c35t1d6   0          47589824   10/0     c38t1d6    ENA
sd c35t1d7-01   lvol01-01    c35t1d7   0          47589824   11/0     c38t1d7    ENA
sd c35t2d0-01   lvol01-01    c35t2d0   0          47589824   12/0     c38t2d0    ENA
sd c35t2d1-01   lvol01-01    c35t2d1   0          47589824   13/0     c38t2d1    ENA

============== vxdg free ============
DISK     DEVICE    TAG       OFFSET     LENGTH     FLAGS
c35t0d4  c38t0d4   c38t0d4   47589824   64         -
c35t0d5  c38t0d5   c38t0d5   47589824   64         -
c35t0d6  c38t0d6   c38t0d6   47589824   64         -
c35t0d7  c38t0d7   c38t0d7   47589824   64         -
c35t1d0  c38t1d0   c38t1d0   47589824   64         -
c35t1d1  c38t1d1   c38t1d1   47589824   64         -
c35t1d2  c38t1d2   c38t1d2   47589824   64         -
c35t1d3  c38t1d3   c38t1d3   47589824   64         -
c35t1d4  c38t1d4   c38t1d4   47589824   64         -
c35t1d5  c38t1d5   c38t1d5   47589824   64         -
c35t1d6  c38t1d6   c38t1d6   47589824   64         -
c35t1d7  c38t1d7   c38t1d7   47589824   64         -
c35t2d0  c38t2d0   c38t2d0   47589824   64         -
c35t2d1  c38t2d1   c38t2d1   47589824   64         -
c35t2d2  c38t2d2   c38t2d2   0          47589888   -
c35t2d3  c38t2d3   c38t2d3   0          47589888   -
c35t2d4  c38t2d4   c38t2d4   0          47589888   -
c35t2d5  c38t2d5   c38t2d5   0          47589888   -
c35t2d6  c38t2d6   c38t2d6   0          47589888   -
c35t2d7  c38t2d7   c38t2d7   0          47589888   -
c35t3d0  c38t3d0   c38t3d0   0          47589888   -

============== bdf2 ============
/dev/vx/dsk/ccrmapvg11/lvol01  666257408  554988262  104314825  84%  /nbsftp4
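For what it's worth, a column change like this is normally driven entirely by vxassist relayout and can be watched with vxrelayout and vxtask. The commands below are a minimal sketch, not a verified procedure: they assume the disk group is ccrmapvg11 (the outputs above show ccrmapvg11, while the quoted command says ccrmapvg01) and that the disk group has enough free space for the temporary area the relayout needs.

# Start the column change on the striped volume (disk group name taken from the vxdg list output above)
vxassist -g ccrmapvg11 relayout lvol01 layout=stripe ncol=7

# Watch progress while the relayout runs
vxrelayout -g ccrmapvg11 status lvol01
vxtask list

# If the relayout is interrupted, it can usually be resumed or backed out
vxrelayout -g ccrmapvg11 start lvol01
vxrelayout -g ccrmapvg11 reverse lvol01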
VxFS Performance problem after HPUX/Veritas upgrade

On an old machine, model 9000/800/rp4440 (4 PA-RISC 8800 processors, 1000 MHz, 64 MB) with 16 GB of RAM, we have upgraded HP-UX from 11.23 to 11.31 and Veritas from 3.5 to 4.1. Since then the system intermittently shows very high response times: all normal operations (login, directory listing, ...) take a very long time, even though CPU usage during those periods is very low. Memory is only about 50% used, paging is not happening at all, and I/O does not appear to be overloaded either. Sometimes the situation resolves itself after a while without any intervention, but during working hours the machine has to be restarted, because we cannot wait that long during the day.

One measurable symptom we have detected is slow execution of the lstat64 system call on VxFS partitions/mount points. We used tusc to trace an 'll' of a directory with 15k files; NFS mount points are not affected. For example:

06:31:46 [ls -l /test ]{2035012} (0.000054) lstat64("/test/FILE_00249395_20130301063957.Z", 0x77ff0468) = 0
    st_dev: 64 0x000003   st_ino: 42172    st_mode: S_IFREG|0640   st_nlink: 1
    st_rdev: 0 0x000000   st_size: 0       st_blksize: 8192        st_blocks: 0
    st_uid: 0             st_gid: 3        st_aclv: 0              st_acl: 0
    st_fstype: 10
    st_atime: Thu Feb 26 15:29:17 2015   st_mtime: Thu Feb 26 15:29:17 2015   st_ctime: Thu Feb 26 15:29:17 2015

06:32:29 [ls -l /test ]{2035012} lstat64(0x40001888, 0x77ff0468) [running]
06:32:29 [ls -l /test ]{2035012} (42.357834) lstat64("/test/FILE_00249396_20130301064257.Z", 0x77ff0468) = 0
    st_dev: 64 0x000003   st_ino: 42173    st_mode: S_IFREG|0640   st_nlink: 1
    st_rdev: 0 0x000000   st_size: 0       st_blksize: 8192        st_blocks: 0
    st_uid: 0             st_gid: 3        st_aclv: 0              st_acl: 0
    st_fstype: 10
    st_atime: Thu Feb 26 15:29:17 2015   st_mtime: Thu Feb 26 15:29:17 2015   st_ctime: Thu Feb 26 15:29:17 2015

06:32:29 [ls -l /test ]{2035012} (0.000063) lstat64("/test/FILE_00249397_20130301064557.Z", 0x77ff0468) = 0

As you can see, one lstat64 call took more than 42 seconds. This is on a local VxFS file system (Veritas 4.1 is installed). The full 'll' of the directory sometimes takes 15 minutes, sometimes even 20; right after a reboot it takes only a few seconds.

Just to note: neither hardware nor software changed apart from the upgrade, and all of these issues started after it. I don't know whether anyone has seen similar problems after upgrading to 11.31 (Veritas 4.1). Any hint about what we should check would be much appreciated.
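In case it helps anyone reproduce this without a full tusc trace, here is a minimal sketch for spotting the slow calls: it simply times every lstat() in the directory and prints the outliers. It assumes perl is available on the box and that /test is the affected directory, as in the trace above.

# Flag any lstat() in /test that takes 2 seconds or longer (1-second resolution is enough for 42 s stalls)
perl -e '
    opendir(D, "/test") or die "opendir: $!";
    while (my $f = readdir(D)) {
        my $t0 = time();
        lstat("/test/$f");
        my $dt = time() - $t0;
        print "SLOW: ${dt}s $f\n" if $dt >= 2;
    }
    closedir(D);
'

If the stalls only appear after the machine has been up for a while, comparing the VxFS inode cache and buffer cache sizing against the pre-upgrade values may be worthwhile; if I recall correctly the relevant HP-UX kernel tunables are vx_ninode and vxfs_bc_bufhwm (kctune can display them), though the exact names may vary by release, so treat that as a pointer rather than a diagnosis.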
Storage Foundation with VCS migration

Dear Experts,

We have SFHA CFS with VCS 5.0 running on 2 x HP rx6600 Itanium servers with an Oracle DB cluster, and we use VVR to replicate the Oracle production DB to the DR site. Everything is currently at version 5.0. We want to migrate to new rx2800 i2 Itanium servers and reuse the existing licenses; I checked, and both server models fall in the same licensing tier. My questions are:

1. Can we install SFHA 5.1 SP1 on the new rx2800 i2 servers with the same license keys?
2. The old servers run HP-UX 11i v2 whereas the new ones will run HP-UX 11i v3; will this cause any issue in using the old keys?
3. Are there any other dependencies or issues we should know about before we start?

Our team has misplaced all of the license documents, and I can only find the license keys on the currently running servers. I know it sounds stupid, but we do not have the SFHA 5.0 media and have somehow got hold of media with SFHA 5.1 SP1, so I just thought of asking for help to understand what can and cannot be done. Thanks a lot in advance for your efforts!
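On the key question specifically, one low-risk preparation step is to dump the keys that are installed today so they can be re-entered (or shown to licensing support) later. This is only a sketch and assumes the standard VRTSvlic licensing utilities are in the PATH on both the old and new servers:

# On the old 5.0 servers: report every installed license key and keep a copy
vxlicrep > /var/tmp/vxlicrep.old.txt

# On the new servers, after installing SFHA 5.1 SP1, add each saved key back
vxlicinst -k <license-key>

As a side note, 5.1 SP1 also offers keyless licensing through the vxkeyless utility, which may be a fallback if the old 5.0 keys are not accepted on the new release; whether your existing keys carry across product versions and OS releases is ultimately a question for Veritas licensing support.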
Versions of Veritas Volume manager

I am running Veritas Volume Manager on HP-UX; the version is B.05.10.01. It ships with an embedded Java, version 1.6.0.06. Is there a more recent VxVM version that would bump the embedded Java up to a higher version? Thanks.

Also, removing the embedded Java entirely would work for us.
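For anyone trying to answer this, it helps to know which Veritas depot actually delivers the JRE. The commands below are only a sketch: the grep pattern and the /opt/VRTSjre path are assumptions about the usual packaging, so adjust them to whatever swlist reports on your system.

# List installed Veritas products and spot the JRE depot (often named something like VRTSjre)
swlist -l product | grep -i vrts

# If a VRTSjre-style depot is installed, its java binary reports the embedded version
# (the path is an assumption; use the location swlist shows)
/opt/VRTSjre/jre*/bin/java -version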
Announcing Storage Foundation High Availability Customer Forum 2012

We are excited to announce the Storage Foundation High Availability Customer Forum 2012, a free learning event by the engineers, for the engineers. Registration is open; register now to get on the priority list. The forum will take place at our Mountain View, CA offices on March 14th and 15th, 2012. Join your peers and our engineers for two days of learning and knowledge sharing. The event features highly technical sessions to help you get more out of your days:

- Learn and share best practices with your peers in the industry and build a long-lasting support network in the process
- Become a Power User by significantly increasing your troubleshooting and diagnostic skills as well as your product knowledge
- Engage with the engineers who architected and wrote the code for the products

Please see here for the event agenda and session details. More questions? See our events page.