shutdown and restart Cluster with VEA
Hi all, I have to shut down our hardware for a while, and I'm looking for the best way to do that. I have two SAN storage arrays connected to two Microsoft cluster nodes (Windows Server 2003) with VEA 3.2. I plan to shut the systems down in this order:

1. Passive cluster node
2. Active cluster node
3. First SAN controller
4. Second SAN controller
5. Both storage arrays
6. FC switches

and then restart in this order:

1. FC switches
2. Both storage arrays
3. Both SAN controllers
4. Both cluster nodes

Is there anything I have to watch out for? Will VEA start a resync after the restart that runs for hours, or will it simply reconnect? I hope you can help me. Thank you!
global group takes a long time to go online
Hi everyone, I'm facing a problem with the time it takes a service group configured as a global group to come online. The group can fail over between two clusters (Cluster1 and Cluster2), each with only a single server, and it is taking a long time for the service group to come online after a failover event. I am simulating the failover by powering off the server that is running the service group, to mimic a power supply cut on that node. Is there a parameter I could modify to reduce this time? Attached you will find the service group relationships and properties. Thank you, Marlon
Improving disk mirroring
Hi, I need some advice on improving disk mirroring performance with SFW. According to the documentation, "Configuration task performance tuning" is intended for migrations only. Can it also be used to improve day-to-day disk mirroring? We have already disabled task throttling and enabled SmartMove. It currently takes us 20 hours to resync a 2 TB mirror, and we hope to improve the speed.
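A quick back-of-the-envelope check puts the numbers from the post in perspective. This is just arithmetic, assuming decimal units (2 TB = 2 × 10^12 bytes); the 5-hour target window is an illustrative figure, not from the post:

```python
# Rough resync throughput sanity check (assumption: decimal TB, i.e. 2 * 10**12 bytes).
def resync_throughput_mb_s(size_bytes, hours):
    """Effective sustained rate needed to copy size_bytes in the given time."""
    return size_bytes / (hours * 3600) / 1e6

current = resync_throughput_mb_s(2e12, 20)  # ~27.8 MB/s sustained today
target = resync_throughput_mb_s(2e12, 5)    # ~111 MB/s needed for a 5-hour window
print(f"current: {current:.1f} MB/s, 5h target: {target:.1f} MB/s")
```

At roughly 28 MB/s sustained, the resync is running well below what modern SAN storage can stream sequentially, which suggests throttling or contention rather than raw disk speed as the limit.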
VVR -DCM log fails to completely drain
Hi, my replication runs fine for a day or two, then replication stops: the DCM log gets stuck at 2% and keeps growing. According to article TECH55550, this is caused by an SRL overflow, and the resolution is to disable access to the volume. We cannot do that every second day; this is a very critical system and we cannot afford any downtime. Could it be that VVR is simply not suitable for our environment? How can I determine the best data transfer solution between the production DC and the DR DC? SAN mirroring is an option, but how would it handle regions that change all the time? Our environment looks like this:

10 Gb link between the DCs
DB size: 1.6 TB
Daily data change: +- 300 GB

Thanks, Marius
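Comparing the stated change rate against the link is a useful first step. A minimal sketch, assuming decimal units and that the full 10 Gbit/s is usable for replication:

```python
# Quick check: is the average change rate anywhere near the link capacity?
# Assumptions: decimal units, 10 Gbit/s usable, 300 GB changed per day.
link_mb_s = 10e9 / 8 / 1e6             # 1250 MB/s link capacity
avg_change_mb_s = 300e9 / 86400 / 1e6  # ~3.5 MB/s average change rate
print(f"link: {link_mb_s:.0f} MB/s, average change rate: {avg_change_mb_s:.1f} MB/s")
```

Since the average change rate is a tiny fraction of the link, sustained bandwidth is unlikely to be the cause of the SRL overflow; it is more likely write bursts exceeding the SRL size, or a bottleneck at the secondary. Sizing the SRL for peak burst rate rather than the daily average is the usual recommendation.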
How does mirroring work within Storage Foundation
How does Storage Foundation mirroring work (a technical deep dive)? How does it sync, synchronously or asynchronously? In other words, if I have a LUN in Datacentre 1 mirrored with a LUN in Datacentre 2, and Datacentre 2 is 50 km away from Datacentre 1 (just an example), will this slow down system performance?
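A host-based mirror completes a write on both plexes before acknowledging it, so distance adds latency to every write. The physics floor can be estimated with a simple calculation, assuming light travels through fibre at roughly 200,000 km/s (refractive index ~1.5); real-world latency from switches and arrays comes on top:

```python
# Back-of-the-envelope: added write latency for a synchronous mirror across a distance.
# Assumption: light in fibre travels ~200,000 km/s (refractive index ~1.5).
def round_trip_ms(distance_km, km_per_s=200_000):
    return 2 * distance_km / km_per_s * 1000

print(f"{round_trip_ms(50):.2f} ms added per write")  # ~0.50 ms, before switch/array latency
```

Half a millisecond per write is negligible for many workloads but can matter for latency-sensitive databases with high write rates.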
Poor disk write performance
We have a Server 2003 R2 machine which primarily runs as our NetBackup master server. In addition, we have a second machine which is/was set up in a cluster using Veritas Storage Foundation, with VVR to replicate the data and VCS to handle the application side of things. About three weeks ago we noticed that disk write performance was very bad. Using a tool provided by NetBackup to write zeros to the file systems, we noted that it would start off fast and then performance would drop significantly, to the point where it was writing at about 0-1 MB/s. This is present on both SAN volumes and local volumes; the same write profile is observed on each. We also see disk queue lengths go up to 130, but no higher. This is the case on both of the servers running VCS/SFW, but is not present on any other servers running NBU without the Storage Foundation software. There is no VxCache memory configuration enabled on the server or any disks. All the disks are reporting as being misaligned. Some time ago VVR was disabled/broken and has not been reinstated, as we do not have Symantec support on the product and no one here knows how to use it. So I am reaching out to anyone who may be able to point me in the right direction. I have read the admin guide, and it has given me a bit more understanding of the components in the software, but it has not provided much in the way of troubleshooting this write performance problem. Today I will run IOMeter to test reads, writes, and a combination of both. The VCS/SFW version is 5.1 SP1.
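The "fast start, then collapse" profile is characteristic of writes landing in cache and then stalling once the cache fills. A minimal stand-in for the NetBackup zero-writer (hypothetical, not the actual NBU tool) that reports per-chunk throughput makes that pattern easy to see:

```python
# Minimal sequential-write probe (hypothetical stand-in for the NetBackup zero-writer):
# writes zero-filled chunks and reports MB/s per chunk, so a fast start followed by
# a collapse to ~0-1 MB/s is easy to spot.
import os
import time

def write_probe(path, chunk_mb=8, chunks=6):
    rates = []
    buf = b"\0" * (chunk_mb * 1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(chunks):
            t0 = time.perf_counter()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force each chunk to disk so cache doesn't hide the stall
            rates.append(chunk_mb / (time.perf_counter() - t0))
    os.remove(path)
    return rates  # MB/s per chunk

for i, rate in enumerate(write_probe("probe_test.bin")):
    print(f"chunk {i}: {rate:.1f} MB/s")
```

Running this against a path on each volume (SAN and local) gives comparable numbers; if the rate collapses only with `fsync` in the loop, the problem is downstream of the file system cache.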
Storage foundation for Windows bandwidth issue..
Hi, we have bought the products below:

1. VRTS STORAGE FOUNDATION 6.0 WIN FOR OS TIER STANDARD EDITION PER SERVER STD LIC EXPRESS BAND S
2. VRTS STORAGE FOUNDATION OPTION VOLUME REPLICATOR 6.0 WIN FOR OS TIER STANDARD EDITION PER SERVER STD LIC EXPRESS BAND S

The products were bought for both sites. Now, we have the following to replicate:

1. Operating system: Windows 2008 R2 Enterprise
2. Drives: C:, D:, E:
3. An Oracle 11.2 database installed on this system (the installation is on the C: drive and the database is on the D: drive)
4. Bandwidth: shared 24 Mbps / 18-20 ms
5. All drives are local drives in a RAID configuration
6. Database size is 15-16 GB

I am very new to Storage Foundation and have the following questions:

1. Will the above licenses work for drive replication to a different geographical location?
2. Is the bandwidth enough for the replication?

Is there any "getting started" installation documentation? Is there anything I need to check before installation? What would the performance be like? Any pros and cons...

Thanks.
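The bandwidth question can be roughed out with simple arithmetic. A sketch, assuming decimal units, the full 24 Mbps available to replication, and no protocol overhead (a shared link will do worse):

```python
# Rough feasibility check for the 24 Mbps link (assumptions: decimal units,
# the full link available to replication, no protocol overhead).
LINK_MB_S = 24e6 / 8 / 1e6  # 3.0 MB/s

def initial_sync_hours(size_gb, link_mb_s=LINK_MB_S):
    """Best-case time to push the initial copy across the link."""
    return size_gb * 1000 / link_mb_s / 3600

def daily_capacity_gb(link_mb_s=LINK_MB_S):
    """Most data the link can move in 24 h - the daily change rate must stay below this."""
    return link_mb_s * 86400 / 1000

print(f"initial sync of 16 GB: ~{initial_sync_hours(16):.1f} h")
print(f"daily link capacity:   ~{daily_capacity_gb():.0f} GB/day")
```

So the 16 GB initial sync is feasible in well under a day even on this link; whether steady-state replication keeps up depends on the database's daily redo/change volume, which the post does not state.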
Every 64-bit Windows system with QLogic FC HBAs is under-performing
Shock headline, lots of caveats. The April 2013 release of the QLogic Windows device driver for 2[4-6]xx Fibre Channel HBAs (9.1.11.20) uses 64-bit DMA addressing. Presumably previous versions (<= 9.1.10.28) use 32-bit DMA addressing. Supporting evidence from the release notes: "Added support for full 64 bit physical addressing [ER101742]". I have not talked with anyone from QLogic to confirm this, but seeing the above in the release notes was an a-ha! moment. I believe it because I have lots of observations that fit: specifically, using 100% of a core per 200 MB/s I/O stream (probably doing buffer copying and interrupt handling) and extreme volatility of I/O speeds under load. Now I know why Emulex cards perform better in Windows 2008 R2 environments.
data migration by mirroring of volume - How is performance affected ?
Our customer is considering migrating data from old storage to new storage with SFW. He is considering doing it during production time (so applications will have access to the volume). They said they used some other tool before, and performance (response time) was so degraded they had to stop it. I don't know the amount of data to be migrated, but I think that is why they are considering migrating during production. Is it possible to use SFW to migrate data during production, and how badly is performance degraded? AFAIK it is possible to create a mirror during normal use of a volume, but it is not recommended to use the volume heavily while doing so. In the documentation I found: "Adding mirrors to a volume involves a certain amount of time for mirror resynchronization. Take care not to perform actions on the volume until the command is complete." There is also: "Also, the generation of multiple mirrors does affect system resources." Maybe SmartMove or FastResync would help? Any experiences with this?
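SmartMove helps here because it copies only the blocks the file system has allocated, rather than the whole volume. A rough model of the benefit, using illustrative numbers that are assumptions (the volume size, used fraction, and copy rate are not from the post):

```python
# SmartMove copies only file-system-allocated blocks during mirror resync.
# Illustrative estimate of the benefit (all numbers below are assumptions).
def resync_hours(volume_gb, used_fraction, rate_mb_s, smartmove=True):
    data_gb = volume_gb * (used_fraction if smartmove else 1.0)
    return data_gb * 1000 / rate_mb_s / 3600

# e.g. a 2000 GB volume that is 40% full, copied at 30 MB/s:
print(f"without SmartMove: {resync_hours(2000, 0.4, 30, smartmove=False):.1f} h")
print(f"with SmartMove:    {resync_hours(2000, 0.4, 30):.1f} h")
```

The resync window, and therefore the period of production impact, shrinks in direct proportion to how empty the volume is, which is why SmartMove is usually worth enabling before a mirror-based migration.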