2 NIC resources in a global cluster
Hi all. I'm using an application in a global cluster environment that was configured by a supplier. The application uses two NICs: one on the LAN used to manage the servers, and one on the LAN used to manage the network and all the network elements. The supplier only configured a NIC resource for the network/network-element NIC, so if that NIC faults the application can switch over, while if the server-management NIC faults the application just freezes. This behaviour is obviously unacceptable, but the supplier's answer was: the application doesn't support this feature (two NIC resources together). Is that possible? I know you cannot know how the application works, but I think the configuration of the NIC resources is totally independent of the application itself, whichever way it works. Is this correct?

Thanks in advance and BR,
Tiziano
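For illustration only: VCS itself does not limit a service group to a single NIC resource, so a second one can normally be added alongside the existing resource. A minimal sketch with the ha* commands - the group name app_sg, the resource names app_res/mgmt_nic and the device bge0 are made-up placeholders, not taken from the actual configuration:

    haconf -makerw
    # add a second NIC resource for the server-management interface
    hares -add mgmt_nic NIC app_sg
    hares -modify mgmt_nic Device bge0
    hares -modify mgmt_nic Enabled 1
    # make the application resource depend on it too, so a fault on either
    # interface is detected and can trigger a switch
    hares -link app_res mgmt_nic
    haconf -dump -makero

Whether the application behaves well behind two monitored interfaces is a question for the supplier, but the NIC resources themselves are configured at the cluster level, not inside the application.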
Failed to bring up heartbeat interface with GAB config

Dear Experts,

I'm facing an issue in an 8+2 cluster setup. I managed to create a 1+1 cluster, but when trying to add the 3rd node I got stuck at the gabconfig step. Below is /var/adm/messages:

Jun 21 22:49:50 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Starting hardware (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy capabilities (1) (supported 0x00000f50 0x00000000) (link_config 0x01000000 0x00000000)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.warning] WARNING: bnxe3: Phy 10000fdx requested but not supported
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy requesting link config of 0x00000006
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: 10Gb Full Duplex Rx Flow ON Tx Flow ON Link Up
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Hardware started (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartL2: L2 started (clients L2)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: Stopping L2 (clients L2)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Stopping hardware (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Link Down
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Hardware stopped (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: L2 stopped (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartL2: Starting L2 (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Starting hardware (clients None)
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy capabilities (1) (supported 0x00000f50 0x00000000) (link_config 0x01000000 0x00000000)
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.warning] WARNING: bnxe3: Phy 10000fdx requested but not supported
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy requesting link config of 0x00000006
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: 10Gb Full Duplex Rx Flow ON Tx Flow ON Link Up
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Hardware started (clients None)
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartL2: L2 started (clients L2)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: Stopping L2 (clients L2)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Stopping hardware (clients None)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Link Down
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Hardware stopped (clients None)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: L2 stopped (clients None)

Note that I have the same network interface profiles on all blades (HP ProLiant blades). Any ideas how this can be fixed?

BR,
Mahmoud
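A few read-only checks that are usually the first step when GAB will not seed on an additional node - a sketch using the standard LLT/GAB file locations, with the node count purely illustrative:

    # on the node that will not join:
    cat /etc/llthosts        # must list every cluster node with a unique node ID
    cat /etc/llttab          # this node's ID and its private heartbeat links
    cat /etc/gabtab          # normally a single line such as: /sbin/gabconfig -c -nN

    lltstat -nvv | more      # do the LLT links see the other nodes as UP?
    gabconfig -a             # has GAB membership (port a / port h) formed?

When growing from a 1+1 cluster, the seed number N in /etc/gabtab (on all nodes) has to reflect the new node count, otherwise gabconfig waits for members that never arrive. The repeated bnxe Link Up / Link Down cycles in the messages above are also worth chasing, since flapping heartbeat NICs will keep LLT from stabilising.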
vxconfigd: fatal: relocation error

Hi,

On my Solaris box where I have VxVM, I'm getting the error below:

root@abc # vxconfigd
ld.so.1: vxconfigd: fatal: relocation error: file /etc/vx/lib/discovery.d/libvxemc.so: symbol ddl_asl_trace: referenced symbol not found
VxVM vxconfigd ERROR V-5-1-0 Killed.
root@abc # vxdg list
VxVM vxdg ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
root@abc # vxdisk list
VxVM vxdisk ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
root@abc #

The daemon is not running. Please help me resolve this issue.

Regards,
Arup
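For what it is worth, the loader error points at the EMC ASL (libvxemc.so) rather than at vxconfigd itself, which usually indicates an ASL built for a different VxVM level. A common workaround sketch - the .save directory name is arbitrary, and this only sidesteps the bad library until a matching ASL/APM package is installed:

    # move the stale EMC ASL out of the discovery directory
    mkdir /etc/vx/lib/discovery.d.save
    mv /etc/vx/lib/discovery.d/libvxemc.so /etc/vx/lib/discovery.d.save/

    # restart the configuration daemon and confirm it answers again
    vxconfigd -k -m enable
    vxdisk list

    # compare the ASL/APM package level with the VxVM package level
    pkginfo -l VRTSaslapm VRTSvxvm

Until a matching libvxemc.so is reinstalled, EMC LUNs may be claimed generically, so treat this as a way to get the daemon back rather than a final fix.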
VVR paused due to network disconnection

Hi all. I have a global cluster with 2 minicluster systems running Solaris 10 (SPARC):

primary:   MIVDB01S - 172.22.8.132
secondary: MILDB08S - 10.66.11.148

The secondary was shut down (init 0) for 3 days; after that I booted it again from the OK prompt, and one day later I checked the state of the replication:

MIVDB01S root vradmin -g datadg printrvg datarvg
Replicated Data Set: datarvg
Primary:
  HostName: 172.22.8.132 <localhost>
  RvgName: datarvg
  DgName: datadg
Secondary:
  HostName: 10.66.11.148
  RvgName: datarvg
  DgName: datadg

vxrlink -g datadg status datarlk
Wed Jan 21 09:51:08 2015
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink datarlk. DCM contains 874432 Kbytes (1%) of the Data Volume(s).

vradmin -g datadg repstatus datarvg
Replicated Data Set: datarvg
Primary:
  Host name: 172.22.8.132
  RVG name: datarvg
  DG name: datadg
  RVG state: enabled for I/O
  Data volumes: 1
  VSets: 0
  SRL name: srl_vol
  SRL size: 1.00 G
  Total secondaries: 1
Secondary:
  Host name: 10.66.11.148
  RVG name: datarvg
  DG name: datadg
  Data status: consistent, behind
  Replication status: paused due to network disconnection (dcm resynchronization)
  Current mode: asynchronous
  Logging to: DCM (contains 874432 Kbytes) (SRL protection logging)
  Timestamp Information: N/A

vxprint -Pl | grep flags
flags: write enabled attached consistent disconnected asynchronous dcm_logging resync_paused

----- 1st solution
MIVDB01S root vradmin -g datadg resync datarvg

----- 2nd solution
Stop vradmin on the secondary, then on the primary:
# /usr/sbin/vxstart_vvr stop
Start vradmin on the secondary, then on the primary:
# /usr/sbin/vxstart_vvr start

Can you help me?
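Since the rlink is in dcm_logging / resync_paused, the usual sequence is to make sure the two hosts can actually talk over the VVR ports again and then restart the DCM replay - a sketch, assuming the default VVR ports and the disk group / RVG names shown above:

    # from the primary, confirm basic reachability and the configured ports
    ping 10.66.11.148
    vrport                                   # lists the ports VVR is using

    # check the rlink and kick off the DCM resynchronisation again
    vxrlink -g datadg status datarlk
    vradmin -g datadg resync datarvg

    # watch the replay drain the DCM (refreshes every 5 seconds)
    vxrlink -g datadg -i 5 status datarlk

Once the DCM is empty, the rlink should fall back to normal SRL-based asynchronous replication on its own.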
Solaris 10 + Zones + ZFS + VCS

Hi all,

I don't have a lot of experience with VCS. I have 2 servers running Solaris 10 Update 11, installed on ZFS (rpool), and I want to build a cluster for an application APP.

Node A = zonea (the zonepath for zonea is on local ZFS disk)
Node B = zoneb (the zonepath for zoneb is on local ZFS disk)

The application APP (failover) has to move between zonea and zoneb. Is this possible with that configuration? If not, what is the best practice for Solaris 10 + local ZFS zones + VCS 6.0.1?

Best regards
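For illustration, VCS can fail an application over between two differently named local zones by localising the group's ContainerInfo attribute per system. A sketch using the ha* commands, where zone_sg and app_zone are made-up names and only zonea/zoneb and the node names come from the post (VCS 6.0.x also ships a hazonesetup utility that builds this kind of configuration for you):

    haconf -makerw
    hagrp -add zone_sg
    hagrp -modify zone_sg SystemList nodeA 0 nodeB 1
    hagrp -modify zone_sg AutoStartList nodeA

    # zone names differ per node, so make ContainerInfo a per-system attribute
    hagrp -local zone_sg ContainerInfo
    hagrp -modify zone_sg ContainerInfo Name zonea Type Zone Enabled 1 -sys nodeA
    hagrp -modify zone_sg ContainerInfo Name zoneb Type Zone Enabled 1 -sys nodeB

    # the Zone resource boots/halts the local zone; APP resources sit on top of it
    hares -add app_zone Zone zone_sg
    hares -modify app_zone Enabled 1
    haconf -dump -makero

Because each zonepath lives on local ZFS, both zones have to be kept installed and patched on their own node; only the service group (and any shared application data) moves between them.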
After mirroring a plex, the ssh service is not running

Hi all,

I have some production servers that are running an SFHA cluster. I need to mirror a volume to new storage using:

# vxassist -g DGname mirror VOLname alloc="vxdisk1 vxdisk2 . . . . vxdisk13"

After 190 minutes the ssh service could no longer be accessed. I aborted the process because I am afraid it will affect other service groups and make the server panic and reboot. Does anyone have an alternative approach, for example mirroring at the subdisk level?
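One option that is often tried before redesigning the mirror at the subdisk level is simply to throttle or pause the copy so it stops starving other I/O - a sketch reusing the poster's disk group and volume names, with the slow value purely illustrative:

    # start the mirror attach in the background so it shows up as a task
    vxassist -b -g DGname mirror VOLname alloc="vxdisk1 ... vxdisk13"   # same disk list as the original command

    # find the task id of the running copy
    vxtask -g DGname list

    # slow it down (adds a delay between copy I/Os), or pause/resume it around busy hours
    vxtask set slow=100 <task-id>
    vxtask pause <task-id>
    vxtask resume <task-id>

Whether the ssh hang really came from the mirror I/O load is worth confirming from the system logs before re-running, but throttling via vxtask at least keeps the operation controllable.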
Hostmode setting on SunSTK 6180 after upgrade no longer supported

Good afternoon,

We are in the process of upgrading our Sun STK 6180 disk array to firmware version 07.84.44.10. In the release notes for this firmware we run into the following challenge:

  Solaris Issues
  Solaris with Veritas DMP or other host type
  Bug 15840516 - With the release of firmware 07.84.44.10, the host type 'Solaris (with Veritas DMP or other)' is no longer a valid host type.
  Workaround - If you are using Veritas with DMP, refer to Veritas support (http://www.symantec.com/support/contact_techsupp_static.jsp) for a recommended host type.

What host type should we choose after the upgrade? The connected systems are running Veritas cluster with DMP.

Please advise,
Remco
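Whichever host type is chosen, it can be verified from the Solaris side that DMP still claims the array the same way after the firmware change - a read-only checklist sketch, with the enclosure name left as a placeholder because it depends on the local configuration:

    # before and after the upgrade, compare:
    vxdmpadm listenclosure all                    # array type and status as DMP sees the 6180
    vxdmpadm listapm all                          # which APM is Active for that array type
    vxdmpadm getdmpnode enclosure=<enclosure>     # per-LUN path counts and states
    vxdisk -o alldgs list                         # all LUNs still visible in their disk groups

If the claimed array type changes after the host-type switch, that is the point to go back to Veritas/Oracle support with the concrete before/after output.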
vxddladm shows DMP state as not active

Good morning,

I have an issue that I can't seem to solve and I'm in dire need of assistance. I have Veritas cluster 5.1 running on Solaris 10, connected to a 6180 storage array. The array is directly connected to 2 hosts (no switch!):

Controller port 1A is connected to host A
Controller port 1B is connected to host A
Controller port 2A is connected to host B
Controller port 2B is connected to host B

DMP is taking care of the multipathing and that part looks OK, however I see that the state is set to Not-Active.

Output from vxddladm listsupport libname=libvxlsiall.so:

LIB_NAME         ASL_VERSION         Min. VXVM version
==========================================================
libvxlsiall.so   vm-5.1.100-rev-1    5.1

Output of vxdmpadm list dmpEngenio:

Filename:               dmpEngenio
APM name:               dmpEngenio
APM version:            1
Feature:                VxVM
VxVM version:           51
Array Types Supported:  A/PF-LSI
Depending Array Types:  A/P
State:                  Not-Active

Output from vxdctl mode:

mode: enabled

Both hosts show the same result: State: Not-Active.

So my question is: how do I set the state to Active? Bear in mind that this is a full production system, so I have to make sure that any commands given will not disrupt production; if needed I will schedule downtime.

Can someone assist me? Many thanks!

Remco
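Not an answer as such, but a set of read-only commands that usually helps pin down what Not-Active means here. In general an APM shows Not-Active simply when it is not the module currently loaded for a claimed array type (that reading is an assumption worth confirming with support), so the interesting part is how the 6180 LUNs and paths themselves look - enclosure and disk names below are placeholders:

    vxdmpadm listapm all                          # every APM with its Active/Not-Active state
    vxdmpadm listenclosure all                    # how the 6180 is claimed (array type, status)
    vxdmpadm getsubpaths enclosure=<enclosure>    # per-path state (ENABLED/DISABLED) on each host
    vxdisk list <diskname>                        # multipathing detail for a single LUN

None of these change anything, so they are safe to run on a production system. If all paths show ENABLED and the array is claimed with the expected A/PF-LSI type, the Not-Active APM state on its own may not be a problem.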