SQL Wizard is unable to detect Service group
Hello everyone. We have a 2-node Veritas Storage Foundation HA 6.1 cluster. It contains one SQL Server 2012 service group. When I try to modify this service group using the SQL Server agent configuration wizard, on the first step the wizard is unable to detect the existing SQL service group and offers only to create a new one (screenshot is attached). I've checked the wizard log, but it doesn't contain any errors or warnings. Please help!

P.S. OS: Windows Server 2012, SFW HA 6.1

SSIS in SQL cluster
Hello all, what is the recommended configuration for SSIS in a SQL cluster? If we develop the packages on a different, non-clustered instance with BIDS, do I even need to install SSIS within the cluster environment? This is on a 6.0.1 cluster environment with SFW HA.

DMP, MPIO, MSDSM, SCSI-3 and ALUA configuration settings
OK, I'm somewhat confused, and the more I read the more confused I think I'm getting. I'm going to be setting up a 4-node active/active cluster for SQL. All of the nodes will have 2 separate Fibre Channel HBAs connecting through 2 separate switches to our NetApp. The NetApp supports ALUA, so the storage guy wants to use it. It is my understanding that I need to use SCSI-3 to get this to work. Sounds good to me so far. My question is: do I need to use any of Microsoft's MPIO or MSDSM, or does Veritas take care of all of that? This is on Windows 2008 R2. Also, I read that in a new cluster setup you should connect only one path first, then install, then connect the second path and let Veritas detect it and configure it. Is that accurate? Any info or directions you can point me to will be greatly appreciated. Thanks!

SQL memory management in active/active configuration
Hi, I will have 4 nodes in an active/active/active/active configuration. Each node will have 3 SQL instances installed on it, and each node has 256 GB RAM. I know I can set a limit on the memory each instance can use, but the more memory SQL gets, the better it runs. Ideally, I would set each instance to use 80 GB or so, roughly one third of the node's memory, leaving some for the OS and Veritas. What happens if one of the nodes goes down? Where would the newly failed-over instance get its memory from? Is there a way to manage this? Should I let SQL manage it? I don't want a runaway query on one instance to hog all the node's memory and affect the other instances on that node. What are my options?
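The arithmetic behind this concern can be sketched in Python. This is only an illustration, not SQL Server behavior: the node size and instance count come from the question, the 16 GB OS/Veritas reserve is a hypothetical figure, and the policy of shrinking every surviving instance's cap evenly after a failover is one possible approach, not something the cluster does on its own.

```python
def max_memory_per_instance(node_ram_gb, reserve_gb, instance_count):
    """Evenly divide a node's RAM (minus an OS/Veritas reserve)
    among the SQL instances currently running on that node."""
    return (node_ram_gb - reserve_gb) // instance_count

NODE_RAM_GB = 256   # per node, from the question
RESERVE_GB = 16     # hypothetical reserve for the OS and Veritas

# Normal operation: 3 instances on this node.
normal = max_memory_per_instance(NODE_RAM_GB, RESERVE_GB, 3)
print(f"normal: {normal} GB per instance")            # 80 GB

# A node fails and its instances are spread across the
# 3 survivors, so this node picks up one extra instance.
spread = max_memory_per_instance(NODE_RAM_GB, RESERVE_GB, 4)
print(f"failover, spread out: {spread} GB per instance")  # 60 GB

# Worst case: all 3 instances from the failed node land here.
worst = max_memory_per_instance(NODE_RAM_GB, RESERVE_GB, 6)
print(f"failover, worst case: {worst} GB per instance")   # 40 GB
```

In SQL Server itself these caps correspond to the `max server memory (MB)` setting (changed via `sp_configure`), which is dynamic, so in principle a failover script could lower the caps of the surviving instances before the failed-over instance comes online.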
vxsnap create for SQL

Hi, I am trying to take VSS snapshots for SQL, but I have a problem. This is my environment: 10 databases distributed across 2 volumes, with all the logs on a third:

E:\Database, with 5 databases, size 100 GB
F:\Database, with 5 databases, size 100 GB
J:\Logs_all_Database, with the logs for all 10 databases, size 50 GB

I have 3 volumes for the snapshots with the same sizes. If I take a snapshot for one database, only that database is quiesced, and I get a snapshot with only one consistent database. My question is: can I take a snapshot of the whole volume, giving only the name of the SQL instance, and get all the databases on that volume quiesced? Or do I need to separate the databases onto individual volumes?
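The layout question itself can be modeled with a small sketch. This is not Veritas or VSS code, just a toy model of the volume mapping described above (database names `db1`..`db10` are made up); it shows why a volume-level snapshot is only guaranteed consistent for the databases that were quiesced together, and why the shared log volume drags every database into the picture.

```python
# Layout from the post: each database has data on E: or F:,
# and every database's log lives on the shared J: volume.
DB_VOLUMES = {
    **{f"db{i}": {"E:", "J:"} for i in range(1, 6)},   # 5 databases on E:
    **{f"db{i}": {"F:", "J:"} for i in range(6, 11)},  # 5 databases on F:
}

def volumes_for(dbs):
    """Volumes that must be snapshotted together for these databases."""
    vols = set()
    for db in dbs:
        vols |= DB_VOLUMES[db]
    return vols

def cohabitants(dbs):
    """Databases that share volumes with `dbs` but are not in `dbs`.
    If they are not quiesced too, their on-disk images in a
    volume-level snapshot are not guaranteed to be consistent."""
    vols = volumes_for(dbs)
    return {db for db, v in DB_VOLUMES.items() if v & vols} - set(dbs)

print(sorted(volumes_for({"db1"})))   # ['E:', 'J:']
print(len(cohabitants({"db1"})))      # 9 -- every other database shares J:
```

Because all ten logs sit on J:, quiescing any one database still leaves nine unquiesced databases with files on the snapshotted volumes; only quiescing the whole set makes a snapshot of E:, F:, and J: consistent for every database.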
Veritas Storage Foundation for Windows 5.1 Volume Manager Disk Group fails to come online

We have been trying to implement a 2-node SQL Server cluster on Windows 2008 R2 for the last few days. So far we have successfully completed the Windows cluster part and have tested quorum failover and single basic disk failover. Our challenge is the shared storage between the two nodes: we have 30 LUNs from the SAN but need to present them as a single drive. Since merging the LUNs at the SAN level is not an option for us, and MSCS does not understand dynamic disk groups created with Windows Disk Management, we have to use Veritas Storage Foundation for Windows 5.1. We successfully installed SFW 5.1 and created a cluster dynamic disk group and a volume on this disk group. This volume is visible in Windows Explorer without any issue. To add this disk group to the cluster we followed these steps:

1. In Failover Cluster Manager, we created an "Empty Service or Application".
2. Added a "Volume Manager Disk Group" resource to the application.
3. Right-clicked the resource and selected "Bring this resource online"; the disk group came online.

To test failover we rebooted the first node. The disk group failed over to the second node without any issue. To bring the disk group back to node one we restarted the second node; however, this time the disk group could not come online on the first node, and its status was shown as "Failed". Since then we have rebooted both nodes alternately, refreshed, and rescanned disks, but nothing has brought the resource back online. We have repeated all the steps from the start, with the same result. Can anybody suggest a path forward?