Veritas Storage Foundation - Volume Disabled After 'rmdisk'
Dear All,

I added a LUN to a specific volume and then realised I had added it to the wrong volume. To remove the LUN, the following command was run:

    vxdg -g dg rmdisk vsp-xx-xx

I was then prompted to re-run the command with the "-k" option to remove the disk. However, after re-running it with "-k":

    vxdg -g dg -k rmdisk vsp-xx-xx

... the volume went into a disabled state. Fortunately no data was lost once "vxmend" was completed on the volume. I would just like to know whether this was to be expected when running the above with the "-k" option.

Regards
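
As far as I understand, "-k" keeps the disk media record but forcibly detaches the disk even while subdisks still reference it, which is consistent with the affected plexes and volume going disabled. For reference, a minimal sketch of the gentler removal sequence, using the same placeholder disk group ("dg") and disk media name ("vsp-xx-xx") from the post:

    # Check what still lives on the disk before removing it
    vxprint -g dg -ht | grep vsp-xx-xx

    # Evacuate any subdisks off the disk first (vxevac picks target disks
    # automatically if none are named), then remove it without -k
    vxevac -g dg vsp-xx-xx
    vxdg -g dg rmdisk vsp-xx-xx

With the subdisks evacuated first, rmdisk should succeed without prompting for "-k" and without touching the volume state.
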
RAID5 Full Stripe Write

I created a fresh VxVM 6.0 RAID5 volume on four columns. I performed some basic I/O operations and noticed that writes always end in a read-modify-write cycle. I haven't been able to trigger a full stripe write yet, regardless of the command I used to write to the filesystem. I tried with 64k and with 128k stripe width, but in both cases I cannot get the writes aligned. The filesystem is VxFS. Below are the outputs of my simple tests and the configuration of the RAID5 volume. Any hints where I got it wrong?

1. dd example:

    root@mlincek:~# dd if=/dev/zero of=/mlincek/zero.dd bs=384k
    root@mlincek:~# iostat -zxn 2
                        extended device statistics
        r/s    w/s    kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
       69.0  115.4  8827.8  14777.0   0.0   0.4     0.0     2.3   0  41  c0t2d0
       69.0  115.4  8827.8  14777.0   0.0   0.3     0.0     1.4   0  25  c0t3d0
       70.0  115.4  8955.8  14777.0   0.0   0.4     0.0     2.0   0  36  c0t5d0
       69.5  115.9  8891.8  14841.0   0.0   0.4     0.0     2.2   0  41  c0t4d0

2. Copy a file from another physical volume using the cp command:

    root@mlincek:~# cp testfile.tmp /mlincek/
                        extended device statistics
        r/s    w/s    kr/s     kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
       19.0  159.9  2250.5  20290.3   0.0   0.5     0.0     2.7   0  40  c0t2d0
       17.5  158.4  2239.0  20278.9   0.0   0.3     0.0     1.5   0  16  c0t3d0
       36.5  142.4  4489.5  18051.4   0.0   0.5     0.0     2.6   0  33  c0t5d0
       37.0  143.4  4494.0  18119.8   0.0   0.5     0.0     3.0   0  36  c0t4d0

3. Volume configuration:

    -bash-4.1# vxprint -g mlincekdg -t
    ...
    dg mlincekdg      default      default      28000  1333350408.27.mlincek
    dm mlincekdg01    disk_1       auto         65536  1953459328  -
    dm mlincekdg02    disk_2       auto         65536  1953459328  -
    dm mlincekdg03    disk_3       auto         65536  1953459328  -
    dm mlincekdg04    disk_4       auto         65536  1953459328  -
    sd mlincekdg01-01 mlincek-01   mlincekdg01  0      1953458944  0/0  disk_1  ENA
    sd mlincekdg02-01 mlincek-01   mlincekdg02  0      1953458944  1/0  disk_2  ENA
    sd mlincekdg03-01 mlincek-01   mlincekdg03  0      1953458944  2/0  disk_3  ENA
    sd mlincekdg04-01 mlincek-01   mlincekdg04  0      1953458944  3/0  disk_4  ENA
    pl mlincek-01     mlincek      ENABLED      ACTIVE 5860376832  RAID 4/256   RW
    v  mlincek        -            ENABLED      ACTIVE 5860376832  RAID -       raid5

    -bash-4.1# /opt/VRTSvxfs/sbin/vxtunefs -p /mlincek
    Filesystem i/o parameters for /mlincek
    read_pref_io = 131072
    read_nstream = 4
    read_unit_io = 131072
    write_pref_io = 393216
    write_nstream = 1
    write_unit_io = 131072
    ...

Mount:

    /mlincek on /dev/vx/dsk/mlincekdg/mlincek read/write/setuid/devices/rstchown/delaylog/largefiles/ioerror=mwdisable/xattr/dev=3606d60

The volume was created with the following command:

    root@mlincek:~# vxassist -g mlincekdg -o ordered make mlincek 5860376576 layout=raid5 nlog=0 ncol=4 stripewidth=128k mlincekdg01 mlincekdg02 mlincekdg03 mlincekdg04
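
For a 4-column RAID5 volume with a 128k stripe unit, the full data stripe is (ncol - 1) x 128k = 384 KB (393216 bytes), which already matches the reported write_pref_io. A small sketch of the checks I would run, in case the parameters need to be reapplied or persisted (values are assumptions derived from this specific 4-column, 128k layout; the tunefstab entry format is my assumption and worth confirming against vxtunefs(1M)):

    # Full data stripe = (ncol - 1) * stripe unit = 3 * 128k = 384 KB
    # Re-check the current VxFS I/O parameters for the mount point
    /opt/VRTSvxfs/sbin/vxtunefs -p /mlincek

    # Re-apply the preferred write size and stream count to match the full stripe
    /opt/VRTSvxfs/sbin/vxtunefs -o write_pref_io=393216 /mlincek
    /opt/VRTSvxfs/sbin/vxtunefs -o write_nstream=1 /mlincek

    # Persist across remounts via /etc/vx/tunefstab (assumed entry format)
    echo "/dev/vx/dsk/mlincekdg/mlincek write_pref_io=393216,write_nstream=1" >> /etc/vx/tunefstab

Note that buffered writes through the page cache may still be flushed in pieces smaller than a full stripe, so the dd block size alone does not guarantee full-stripe writes reach the volume layer.
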
VXVM needs license ... please help

Dear all,

I have installed VxVM 5.0 Basic on my virtual machine (Solaris 5.10). When I tried to mount the volume, I received this error:

    UX:vxfs mount: ERROR: V-3-25255: mount: You don't have a license to run this program

Is there any way out? I was looking in a forum, which described checking:

    pkginfo -l VRTSvxfs | grep STATUS      (reports a complete install)
    ls -lad /devices/pseudo/vxportal       (this path is not found)

I then performed:

    devfsadm -v -i vxportal

but it failed with exit status 1.
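
A minimal sketch of the license checks I would try first, assuming the usual VRTSvlic install path (the key value is a placeholder); the missing /devices/pseudo/vxportal node is a separate driver-configuration question on top of the licensing error:

    # Report the license keys currently installed on the host
    /opt/VRTSvlic/bin/vxlicrep

    # Install a valid Storage Foundation / VxFS key if one is missing
    /opt/VRTSvlic/bin/vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX

    # Ask the VxVM configuration daemon to re-read license information
    vxdctl license init

If vxlicrep shows no VxFS feature licensed, the mount error is expected until a key is installed.
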
vxdbd high cpu usage @ hpux

There is evidence that the vxdbd daemon is consuming too much CPU, and tracing its behaviour shows many EPIPE errors on writes. See the first column:

    $ grep " write" ./pid.12331 | awk '{print $4,$5, $7, $9, $16,$17}' | sort | uniq -c | sort -rn
    45355 pid=12331 ktid=20116 write err=32 A0=0 A1=6
    45219 pid=12331 ktid=20117 write err=32 A0=0 A1=10
    43862 pid=12331 ktid=20118 write err=32 A0=0 A1=11
    40211 pid=12331 ktid=20121 write err=32 A0=0 A1=13
    39893 pid=12331 ktid=20120 write err=32 A0=0 A1=12

We think restarting the service could eliminate this condition, as a workaround of course. There is also a patch that contains fixes for it, i.e. PHCO_41073. Before trying a restart of the service, the manual documentation describes it as follows:

    To start the vxdbd daemon, use the vxdbdctrl start command:
        /opt/VRTSdbcom/bin/vxdbdctrl start
    To stop the vxdbd daemon, as root, use the vxdbdctrl stop command:
        /opt/VRTSdbcom/bin/vxdbdctrl stop

Our question is:

Q. Is it safe to restart the vxdbd daemon, with no collateral impact on production services, given the nature or extent of vxdbd's functionality?

Best Regards.
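
For reference, a minimal restart-and-verify sketch using only the vxdbdctrl commands quoted above plus a standard ps check; whether in-flight Storage Foundation for Databases operations tolerate the restart is exactly the open question here:

    # Stop the daemon and confirm it is gone
    /opt/VRTSdbcom/bin/vxdbdctrl stop
    ps -ef | grep '[v]xdbd'        # expect no output

    # Start it again and confirm it is back
    /opt/VRTSdbcom/bin/vxdbdctrl start
    ps -ef | grep '[v]xdbd'        # expect one vxdbd process
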
CVM/CFS on Solaris

Dear All,

I would like to create a two-node VCS cluster that has a CFS mounted on both nodes. Once the CFS is mounted in the cluster, I want to share it with a system that is not part of the VCS cluster. My question: is this possible, and if not, what would be the best way? The objective is to test read/write performance of CFS compared to NFS and QFS. I will appreciate your input.
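
One way this is commonly done is to re-export the CFS mount point over NFS from one of the cluster nodes. A rough sketch, where the hostnames and mount point are placeholders and the export is shown by hand rather than under VCS control:

    # On one cluster node, export the CFS mount point over NFS
    share -F nfs -o rw=outside-host /cfsmount

    # On the non-cluster system, mount it as an ordinary NFS client
    mount -F nfs cluster-node:/cfsmount /mnt/cfs_over_nfs

Keep in mind that the outside system then measures NFS-on-top-of-CFS rather than native CFS performance, and for anything beyond a test the share would normally be managed by VCS (Share/NFS resources) so it fails over with the mount.
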
Veritas Storage Foundation 5.0 - Error code

Dear All,

We have Veritas Storage Foundation 5.0 HA with VVR, VCS, VxVM and VxFS. We would like to find a guide listing all error codes for the above products, similar to the one available for NetBackup. Please reply with a link to such a guide.

Regards,
Dharmesh
QDepth tuneup for DMP

Hi,

Am I right in thinking that the following settings in /etc/system, which set the queue depth and I/O timeout globally for the HBAs:

    set sd:sd_io_time=0x3c
    set sd:sd_max_throttle=8

can instead be controlled at array level using vxdmpadm, as per these tunable parameters?

    https://sort.symantec.com/public/documents/sf/5.0/solaris/html/vxvm_admin/ag_ch_dmp_vm31.html#458394

And will this override the system settings only for the specific array, with the defaults (or the forced /etc/system values) applying to the rest? Any other insights on this would be great.

Thanks
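
Per my reading of the linked DMP chapter, the per-enclosure equivalents are the recoveryoption attributes. A hedged sketch, where the enclosure name and the numeric values are assumptions to be adapted, not recommendations:

    # Show the current recovery/throttle settings for one enclosure
    vxdmpadm getattr enclosure emc_clariion0 recoveryoption

    # Throttle outstanding I/O per enclosure instead of a global sd_max_throttle
    vxdmpadm setattr enclosure emc_clariion0 recoveryoption=throttle queuedepth=8

    # Bound error handling in time rather than relying only on sd_io_time
    vxdmpadm setattr enclosure emc_clariion0 recoveryoption=timebound iotimeout=60

These settings apply only to paths under that enclosure; devices not managed by DMP still see whatever the sd driver enforces from /etc/system.
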
Calculate vxfs millisecond delay

We have a NetBackup master server cluster with the catalog/DB sitting on a VxFS volume. There is currently a case open with Veritas; they have found massive delays (40 seconds!) at backup peak time on the Sybase database and have recommended we run "nbdb_unload -rebuild" and "nbdb_admin -reorganize". Meanwhile, we are looking at the I/O on the catalog/DB volume. The technote below says the response time shouldn't be more than 20 ms. How do I determine the current response time? Is there a Storage Foundation tool that can provide this? Our host is running RHEL 6.6.
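
Storage Foundation's own vxstat can report average service times per volume. A minimal sketch, where the disk group and volume names are placeholders for the catalog volume:

    # Sample the catalog volume every 5 seconds for a minute; the AVG TIME(ms)
    # columns give the average read/write service time over each interval
    vxstat -g nbu_dg -i 5 -c 12 nbu_catalog_vol

    # Reset the counters first if you want statistics from a clean baseline
    vxstat -g nbu_dg -r

Running this during the backup peak and comparing the write AVG TIME(ms) against the 20 ms guideline should show whether the volume itself is the bottleneck; standard Linux iostat -x on the underlying devices is a useful cross-check.
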
SFHA 6.1: Poor performance with VxFS

Hello all,

I'm facing poor performance with VxFS (EMC LUNs, host-based mirrored volume with a log plex). We compress and uncompress a 10 GB file, and here are the results:

    pri620[root]/database/sybase_backup {NSH}: time compress test_a
    real    1m43.920s
    user    1m8.303s
    sys     0m14.824s

    pri620[root]/database/sybase_backup {NSH}: time uncompress test_a.Z
    real    3m29.777s
    user    1m18.035s
    sys     0m14.054s

Here the same, but with two mirrored local disks and ZFS:

    pri620[root]/install {NSH}: time compress test
    real    1m29.672s
    user    1m14.996s
    sys     0m9.424s

    pri620[root]/install {NSH}: time uncompress test.Z
    real    1m27.838s
    user    1m19.017s
    sys     0m6.383s

Uncompressing is much faster there! When I create a 10 GB file with mkfile, iostat shows write performance of about 180-200 MB/s, but while uncompressing the file the write performance is only about 55 MB/s.

Hardware: Oracle T5-2; the server instance is a Logical Domain. I've read several documents but I'm not able to get it faster on VxFS. The only way it was faster is with a special mount option:

    mount -F vxfs -o tmplog,convosync=delay,mincache=tmpcache

But this seems to be very unsafe. Any hints?

Regards,
Heinz
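
Before resorting to tmplog, it may be worth measuring where the 55 MB/s is lost and experimenting with the filesystem's preferred I/O sizes only. A sketch under those assumptions (disk group, volume name and the tunable values are placeholders to test, not recommendations):

    # Watch the mirrored volume while uncompressing to see average write times
    vxstat -g sybasedg -i 5 -c 20 sybase_backup_vol

    # Check the filesystem's current preferred I/O sizes before touching mount options
    /opt/VRTSvxfs/sbin/vxtunefs -p /database/sybase_backup

    # A less drastic experiment than tmplog: raise the preferred write size and
    # the number of parallel write streams, keeping the default delaylog intent log
    /opt/VRTSvxfs/sbin/vxtunefs -o write_pref_io=1048576 /database/sybase_backup
    /opt/VRTSvxfs/sbin/vxtunefs -o write_nstream=4 /database/sybase_backup

These vxtunefs changes take effect on the mounted filesystem and can be reverted the same way, unlike the unsafe tmplog/tmpcache combination.
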
Setting mincache=direct and convosync=direct for VxFS on Solaris 10

Hi,

I am helping a Solaris + VxFS customer use direct I/O for an application that does not use O_SYNC. The customer reported that setting only convosync=direct does not bring much performance benefit, but adding mincache=direct gives a significant performance gain. By the definitions:

    "mincache=direct will cause any reads without the O_SYNC flag, or any writes without the O_SYNC flag, VX_DSYNC, VX_DIRECT, and VX_UNBUFFERED caching advisories, to be handled as if the VX_DIRECT caching advisory had been set."

    "convosync=direct will cause any reads or writes with the O_SYNC flag to be handled as if the VX_DIRECT caching advisory had been set."

This application doesn't use read/write() with O_SYNC. My question is: is convosync=direct mandatory for VxFS direct I/O, or can we use mincache=direct only? And why?

Thanks in advance for any explanation.
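
Going by the two definitions quoted above, mincache=direct is the option that converts ordinary (non-O_SYNC) reads and writes to direct I/O, while convosync=direct only affects O_SYNC I/O, so for this application it is effectively a no-op but harmless to keep. A sketch of the mount commands, with the device and mount point as placeholders, and the remount form shown on the assumption that the installed VxFS release allows changing these options via remount:

    # Mount with both options; mincache=direct is the one that matters for
    # an application that never sets O_SYNC
    mount -F vxfs -o mincache=direct,convosync=direct /dev/vx/dsk/appdg/appvol /app

    # Or adjust an already-mounted filesystem in place
    mount -F vxfs -o remount,mincache=direct,convosync=direct /dev/vx/dsk/appdg/appvol /app
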