How much space is required in a VxFS file system for the recovery file used by the fscdsconv utility
Hi all, I'm going to migrate a VxVM/VxFS (version 5.1SP1RP4) volume and file system from a Solaris SPARC platform to VxVM/VxFS 6.0.3 on a Linux 6.5 platform. The file system is 2 TB in size, with about 1 TB actually in use. How much space does the fscdsconv utility require for its recovery file during the conversion? Is there a formula for this? Thanks a lot, Cecilia

Calculate vxfs millisecond delay

We have a NetBackup master server cluster; the catalog/DB sits on a VxFS volume. There is a case currently open with Veritas: they have found massive delays (40 seconds!) on the Sybase database at backup peak time and have recommended we run "nbdb_unload -rebuild" and "nbdb_admin -reorganize". Meanwhile, we are looking at the I/O on the catalog/DB volume. The technote says the response time shouldn't be more than 20 ms. How do I determine the current response time? Is there a Storage Foundation tool that can provide this? Our host is running RHEL 6.6.
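Not an official Veritas procedure, but one way to see current (rather than cumulative) response times is vxstat's interval mode, which prints AVG TIME(ms) per sample interval. A minimal sketch; the disk group name "nbu_dg" and volume name "catvol" are placeholders, and the awk filter simply flags any sample that breaches the 20 ms guideline:

```shell
# Sample the catalog volume every 5 seconds, 12 times (names are hypothetical):
#   vxstat -g nbu_dg -i 5 -c 12 catvol | flag_slow
flag_slow() {
    awk -v limit=20 '
        $1 == "vol" {
            # Fields: TYP NAME READ-OPS WRITE-OPS READ-BLKS WRITE-BLKS READ-MS WRITE-MS
            if ($7 + 0 > limit || $8 + 0 > limit)
                printf "%s: read %sms / write %sms exceeds %sms\n", $2, $7, $8, limit
        }'
}

# Example against a captured output line:
printf 'vol catvol 3423 38233 159756 400100 5.6 25.4\n' | flag_slow
# -> catvol: read 5.6ms / write 25.4ms exceeds 20ms
```

The first interval reported by vxstat is still cumulative since the last reset; only the subsequent samples are true per-interval deltas.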

Veritas Storage Foundation - Volume Disabled After 'rmdisk'

Dear All, I added a LUN to a specific volume and realised that I had added it to the wrong volume. To remove the LUN, the following command was run: "vxdg -g dg rmdisk vsp-xx-xx". I was then prompted to re-run with the "-k" option to remove the disk. However, after re-running the command with the "-k" option: "vxdg -g dg -k rmdisk vsp-xx-xx" ... the volume went into a disabled state. Fortunately no data was lost once "vxmend" was run on the volume. I would just like to know whether this is to be expected when running the above command with the "-k" option? Regards

SFHA 6.1: Poor performance with VxFS

Hello all, I'm seeing poor performance with VxFS (EMC LUNs, host-based mirrored volume with a log plex). We compress and uncompress a 10 GB file; here are the results:

pri620[root]/database/sybase_backup {NSH}: time compress test_a
real 1m43.920s
user 1m8.303s
sys 0m14.824s

pri620[root]/database/sybase_backup {NSH}: time uncompress test_a.Z
real 3m29.777s
user 1m18.035s
sys 0m14.054s

And the same on two mirrored local disks with ZFS:

pri620[root]/install {NSH}: time compress test
real 1m29.672s
user 1m14.996s
sys 0m9.424s

pri620[root]/install {NSH}: time uncompress test.Z
real 1m27.838s
user 1m19.017s
sys 0m6.383s

Uncompressing is much faster on ZFS! When I create the 10 GB file with mkfile, iostat shows a write rate of about 180-200 MB/s, but while uncompressing the file the write rate is only about 55 MB/s. Hardware: Oracle T5-2; the server instance is a logical domain. I've read several documents but I can't get it any faster on VxFS. The only way it was faster was with special mount options:

mount -F vxfs -o tmplog,convosync=delay,mincache=tmpcache

But this seems very unsafe. Any hints? Regards, Heinz

RUNQ values increased after upgrade to SFHA 6.1/RHEL5

Hi all, I have a customer who has upgraded to SF HA 6.1 on RHEL 5 and noticed that their run queue (RUNQ) increased fourfold (from 0.1 avg to 0.4 avg). As far as I know, nothing else was changed during the upgrade. I'm wondering whether this is an isolated case or expected behavior?

CVM/CFS on Solaris

Dear All, I would like to create a two-node VCS cluster with a CFS mounted on both nodes. Once the CFS is mounted in the cluster, I want to share it with a system that is not part of the VCS cluster. My question: is this possible, and if not, what would be the best way? The objective is to compare read/write performance of CFS against NFS and QFS. I will appreciate your input.
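For the comparison itself, a rough first pass (not a substitute for a proper benchmark tool) is a timed sequential write against each mount point. A minimal sketch; the mount-point paths in the usage comment are placeholders, and `date +%s` is assumed to be available on the platform:

```shell
# Time a sequential write of N megabytes into the given directory.
bench_write() {
    dir=$1
    mb=$2
    f="$dir/bench.$$"
    start=$(date +%s)
    dd if=/dev/zero of="$f" bs=1048576 count="$mb" 2>/dev/null
    sync
    end=$(date +%s)
    elapsed=$((end - start))
    [ "$elapsed" -eq 0 ] && elapsed=1   # avoid divide-by-zero on fast runs
    echo "$dir: $mb MB in ${elapsed}s (~$((mb / elapsed)) MB/s)"
    rm -f "$f"
}

# Hypothetical mount points -- substitute your CFS and NFS mounts:
# bench_write /cfs_mount 1024
# bench_write /nfs_mount 1024
```

Use a file size well beyond the host's RAM (or add direct-I/O options) so that page-cache effects don't dominate the comparison.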

Troubleshooting volume performance in Storage Foundation

Hi, all. We have released a set of articles about troubleshooting volume performance in Storage Foundation. Here is the link: "Troubleshooting volume performance in Veritas Storage Foundation" http://www.symantec.com/docs/TECH202712 Since this is a broad topic, the "technote" is actually a set of about a dozen articles organized into a logical tree structure, with TECH202712 at its root. Let us know what you think! Regards, Mike

Shrinking large filesystem spanned across multiple luns

Hello, we have a mirrored (RAID-1) volume of 1 TB spanned across two arrays (80 LUNs from each array); its usage is now 400 GB (at some point it was 850 GB). We need to shrink the file system to 500 GB so that we can use the freed space for a new file system. I understand vxresize helps here, and that it will attempt to remove the most recently added LUNs first, LIFO-style. My understanding is that if data extents sit on a recently added LUN, they will be evacuated to free blocks on the older LUNs, which can make the resize take a considerable amount of time. I guess defragmentation prior to the resize would help. I would like to estimate the resize operation time, and I'm wondering if there is any way to identify how much data is stored on a particular LUN. Thanks in advance.
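One way to see how much of the volume is allocated on each LUN is to sum subdisk lengths per disk from vxprint output. A minimal sketch, assuming the "-ht" layout in which "sd" records carry the disk media name in column 4 and the subdisk length (in 512-byte sectors) in column 6 — column positions can vary between versions, so verify against your own output first:

```shell
# Sum allocated subdisk space per disk media name (disk group "mydg" is hypothetical):
#   vxprint -g mydg -ht | sum_per_disk
sum_per_disk() {
    awk '
        $1 == "sd" {
            sectors[$4] += $6     # $4 = disk media name, $6 = length in sectors
        }
        END {
            for (d in sectors)
                printf "%-20s %8.1f GB\n", d, sectors[d] * 512 / (1024 * 1024 * 1024)
        }'
}

# Example with two 1 GB subdisks on the same disk:
printf 'sd sd01 pl01 disk01 0 2097152 0 c1t0d0 ENA
sd sd02 pl01 disk01 2097152 2097152 0 c1t0d0 ENA\n' | sum_per_disk
```

Note this shows allocated subdisk space per disk, not how much live file-system data sits on it; only the file system knows which extents are actually in use.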

The vxdclid daemon core dumps on AIX 7.1 hosts with Storage Foundation 4.0 MP4

Hi, please send me solutions on the following topic: the vxdclid daemon core dumps on AIX hosts with Storage Foundation 4.0 MP4. This is the errpt output:

LABEL: CORE_DUMP
IDENTIFIER: A924A5FC
Date/Time: Tue Feb 5 06:38:24 USAST 2013
Sequence Number: 14290551
Machine Id: 00C8468E4C00
Node Id: edmerpr2
Class: S
Type: PERM
WPAR: Global
Resource Name: SYSPROC

Description
SOFTWARE PROGRAM ABNORMALLY TERMINATED

Probable Causes
SOFTWARE PROGRAM

User Causes
USER GENERATED SIGNAL

Recommended Actions
CORRECT THEN RETRY

Failure Causes
SOFTWARE PROGRAM

Recommended Actions
RERUN THE APPLICATION PROGRAM
IF PROBLEM PERSISTS THEN DO THE FOLLOWING
CONTACT APPROPRIATE SERVICE REPRESENTATIVE

Detail Data
SIGNAL NUMBER
11
USER'S PROCESS ID: 34080660
FILE SYSTEM SERIAL NUMBER
4
INODE NUMBER
501817
CORE FILE NAME
/var/opt/VRTSsfmh/core
PROGRAM NAME
vxdclid

vxstat: time since last reset?

Hello, I wonder if there is a way to determine the time since the last reset ("vxstat -r") of the vxstat counters for some or all VxVM objects. Asked differently: when vxstat gives me data like

[root:/]# vxstat
                      OPERATIONS           BLOCKS           AVG TIME(ms)
TYP NAME          READ     WRITE      READ      WRITE       READ   WRITE
vol export     3423282  38233203 159756362  400100767        5.6    25.4
vol rootvol    8105850  30419736  73677793   65981077        2.8    36.8
vol swapvol    2636206    340972  42179296   55502576        7.4  5597.9
vol var        5190036  12921735 116372454  126819449        5.9    14.1

how can I be sure these counts have accumulated since the reboot of the server? Without knowing if and when the counters were last reset, the output of vxstat is of limited use. Instead, I have to reset the counters myself, thereby losing potentially valuable data accumulated over a long time. So is there a way to determine the time of the last reset? This example is from a very old VxVM 4.1 on Solaris 10, but the vxstat man page from 5.1 has no information about the time of the last reset either. KR Jochen
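Since the man pages document no reset timestamp, one workaround is to record your own: wrap the reset in a helper that writes a timestamp file whenever the counters are cleared. A minimal sketch; the stamp-file path and disk group name are placeholders:

```shell
# Run the given reset command and, if it succeeds, record when it happened.
note_reset() {
    stampfile=$1
    shift
    "$@" && date '+%Y-%m-%d %H:%M:%S' > "$stampfile"
}

# Usage (hypothetical disk group):
#   note_reset /var/tmp/vxstat.reset vxstat -g mydg -r
# Later, to see how long the counters have been accumulating:
#   cat /var/tmp/vxstat.reset
```

Alternatively, interval mode ("vxstat -i 5 -c 2") sidesteps the question entirely: the second sample is a delta over a known 5-second window, regardless of when the cumulative counters were last reset.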