InfoScale 7 for Linux (RHEL 7.1) is not configured
Hi, I am trying to install and configure InfoScale 7 on RHEL 7.1 on two physical servers. The product is installed on both nodes, but when I try to configure fencing mode I get an error that Volume Manager is not running. When I try to start the VxVM module, it will not start, and I get:

VxVM vxdisk ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible

On a virtual machine the same configuration works fine. Does anyone have any idea?
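For the "Configuration daemon is not accessible" error, the usual first step is to check and restart vxconfigd. A minimal sketch, assuming the standard VxVM CLI (vxdctl, vxconfigd, vxdisk) is on the PATH; the run() wrapper only prints each command when the tools are absent, so the sketch is safe to try anywhere:

```shell
# Hedged sketch: check and restart the VxVM configuration daemon (vxconfigd).
run() {
    if command -v "${1%% *}" >/dev/null 2>&1; then
        $1                      # word-splitting into command + args is intentional
    else
        echo "would run: $1"
    fi
}

run "vxdctl mode"               # is vxconfigd enabled / disabled / not-running?
run "vxconfigd -k -x syslog"    # kill and restart the daemon, logging to syslog
run "vxdctl enable"             # rescan devices once the daemon is back
run "vxdisk list"               # confirm disks are visible again
```

If vxconfigd still refuses to start, the syslog output from `-x syslog` usually shows why; on physical servers a common difference from VMs is one of the kernel modules (veki, vxio, vxdmp) failing to load for the running kernel.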
Infoscale command information

Hi, I need to know if there is a way to track which disk (by UDID) corresponds to which drive/mount point. I have an existing environment where an Oracle server uses a storage cluster built on InfoScale Storage Foundation, but I have no information about the configuration or disk assignment. Can anyone help?
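One way to reconstruct the mapping is to combine `vxdisk -e list` (which adds an OS_NATIVE_NAME column), `vxdisk list <disk>` (whose output includes the udid field), and `df`/`vxprint`. A sketch, using made-up sample output since the real environment's listing isn't available; on the live host, replace the sample with the actual command output:

```shell
# Hedged sketch: map VxVM disk-access names to OS devices using the
# OS_NATIVE_NAME column of `vxdisk -e list`. Sample data is illustrative;
# on the live host use: sample_vxdisk_e=$(vxdisk -e list)
sample_vxdisk_e='DEVICE       TYPE           DISK      GROUP   STATUS   OS_NATIVE_NAME  ATTR
emc0_1234    auto:cdsdisk   oradg01   oradg   online   sdb             -
emc0_1235    auto:cdsdisk   oradg02   oradg   online   sdc             -'

# OS_NATIVE_NAME is the second-to-last column (ATTR is last), which keeps
# the parse stable even if the STATUS column contains extra words.
printf '%s\n' "$sample_vxdisk_e" | awk 'NR > 1 { printf "%s -> /dev/%s\n", $1, $(NF-1) }'
# prints:
#   emc0_1234 -> /dev/sdb
#   emc0_1235 -> /dev/sdc
```

From there, `vxdisk list emc0_1234` shows the udid field for that disk, and `vxprint -g <diskgroup> -v` plus `df` tie the volumes to mount points (disk and group names above are hypothetical).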
NVMe drives disappear after upgrade to the RHEL7.7 kernel.

Hi, I'm using InfoScale 7.4.1.1300 on RHEL 7.x. Tonight, as I was running RHEL 7.7 with the latest RHEL 7.6 kernel, I decided to upgrade to the RHEL 7.7 kernel (the only part of 7.7 that was missing). This had the nasty side effect of making the NVMe drives disappear.

1) Before the upgrade:

# modinfo vxio
filename:       /lib/modules/3.10.0-957.27.2.el7.x86_64/veritas/vxvm/vxio.ko
license:        VERITAS
retpoline:      Y
supported:      external
version:        7.4.1.1300
license:        Proprietary. Send bug reports to enterprise_technical_support@veritas.com
retpoline:      Y
rhelversion:    7.6
depends:        veki
vermagic:       3.10.0-957.el7.x86_64 SMP mod_unload modversions

# vxdmpadm listctlr
CTLR_NAME  ENCLR_TYPE    STATE    ENCLR_NAME               PATH_COUNT
=========================================================================
c515       Samsung_NVMe  ENABLED  daltigoth_samsung_nvme1  1
c0         Disk          ENABLED  disk                     3

# vxdisk list
DEVICE   TYPE          DISK      GROUP      STATUS
nvme0n1  auto:cdsdisk  -         (nvm01dg)  online ssdtrim
sda      auto:LVM      -         -          LVM
sdb      auto:cdsdisk  loc01d00  local01dg  online
sdc      auto:cdsdisk  -         (ssd01dg)  online

2) After the upgrade:

# modinfo vxio
filename:       /lib/modules/3.10.0-1062.1.1.el7.x86_64/veritas/vxvm/vxio.ko
license:        VERITAS
retpoline:      Y
supported:      external
version:        7.4.1.1300
license:        Proprietary. Send bug reports to enterprise_technical_support@veritas.com
retpoline:      Y
rhelversion:    7.7
depends:        veki
vermagic:       3.10.0-1062.el7.x86_64 SMP mod_unload modversions

# vxdmpadm listctlr
CTLR_NAME  ENCLR_TYPE  STATE    ENCLR_NAME  PATH_COUNT
=========================================================================
c0         Disk        ENABLED  disk        3

# vxdisk list
DEVICE  TYPE          DISK      GROUP      STATUS
sda     auto:LVM      -         -          LVM
sdb     auto:cdsdisk  loc01d00  local01dg  online
sdc     auto:cdsdisk  -         (ssd01dg)  online

I've reverted to the latest z-stream RHEL 7.6 kernel (3.10.0-957.27.2.el7) while I research this issue. Has this been reported already?
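This doesn't explain the DMP/NVMe regression itself, but the first sanity check after any kernel change is confirming which installed kernels actually have a vxio build (the module path follows the modinfo output above). A hedged sketch; PREFIX exists only so the loop can be exercised outside a real InfoScale host, and would be left empty on the server:

```shell
# Hedged sketch: list the kernels that have a vxio build installed and
# compare with the running kernel.
PREFIX=${PREFIX:-}
echo "running kernel: $(uname -r)"
for ko in "$PREFIX"/lib/modules/*/veritas/vxvm/vxio.ko; do
    [ -e "$ko" ] || continue    # glob matched nothing
    kver=$(basename "$(dirname "$(dirname "$(dirname "$ko")")")")
    echo "vxio installed for: $kver"
done
```

If a build exists for the new kernel (as it does here, with rhelversion 7.7), the problem is more likely in device discovery, so `vxdisk scandisks` and `vxdmpadm listctlr` after boot, and Veritas support for the exact 7.4.1.x patch level, are the next places to look.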
Failed to install EAT on system

I am evaluating the InfoScale Availability tool. While installing on RHEL 6.4 (on VMware) I get an error that says "Failed to install EAT on system". I have tried to install several times after a proper uninstall and get the same result, even with the single-server option. Can anyone point out how to resolve this issue?
New Infoscale v7 Installation - Can't add Hosts

This is a new installation of InfoScale v7 on RHEL v6. Bidirectional port 5634 is open between the InfoScale Management Server (RHEL) and the host (Solaris 10 SPARC), and one-way port 22 is open from the management server to the managed host. The host has VRTSsfmh running and listening on port 5634:

solvcstst01:/etc/ssh {root}: ps -ef|grep xprtld
    root  3893      1  0   Mar 01 ?      0:47 /opt/VRTSsfmh/bin/xprtld -X 1 /etc/opt/VRTSsfmh/xprtld.conf
    root  7477  24284  0 08:28:34 pts/1  0:00 grep xprtld

I've allowed (temporarily) direct root login from the management server to the managed host and entered those credentials. Error when adding the host from the InfoScale server: "Registration with Management Server failed".

Error log (the repeated "Waiting for output from configure_mh" / ADD_HOST_INIT_DISCOVERY progress entries are trimmed):

Add Host Log
------------
Started [04/12/2016 08:30:23]
[04/12/2016 08:30:23] [solvcstst01.vch.ca] type rh solvcstst01.vch.ca cms
[04/12/2016 08:30:23] [solvcstst01.vch.ca] creating task for Add host
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Check if MH is pingable from MS and get vital information from MH
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Output: { "XPRTLD_VERSION" : "5.0.196.0", "LOCAL_NAME" : "solvcstst01.vch.ca", "LOCAL_ADDR" : "139.173.8.6", "PEER_NAME" : "UNKNOWN", "PEER_ADDR" : "10.248.224.116", "LOCAL_TIME" : "1460475024", "LOCALE" : "UNKNOWN", "DOMAIN_MODE" : "FALSE", "QUIESCE_MODE" : "RUNNING", "OSNAME" : "SunOS", "OSRELEASE" : "5.10", "CPUTYPE" : "sparc", "OSUUID" : "{00020014-4ffa-b092-0000-000084fbfc3f}", "DOMAINS" : { } }
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:24] [solvcstst01.vch.ca] Checking if MH version [5.0.196.0] is same or greater than as that of least supported MH version [5.0.0.0]
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_PRECONFIG_CHK
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_PRECONFIG_CHK","STATE":"SUCCESS","PROGRESS":1}}
[04/12/2016 08:30:24] [solvcstst01.vch.ca] retrieving Agent password
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_INPUT_PARAM_CHK
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INPUT_PARAM_CHK","STATE":"SUCCESS","PROGRESS":6}}
[04/12/2016 08:30:24] [solvcstst01.vch.ca] user name is "root"
[04/12/2016 08:30:24] [solvcstst01.vch.ca] ADD_HOST_CONTACTING_MH
[04/12/2016 08:30:24] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_CONTACTING_MH","STATE":"SUCCESS","PROGRESS":20}}
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Output: HTTP/1.1 302 OK
Status: 307 Moved
Location: /admin/htdocs/cs_config.htm
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Return code: 768
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Checking to see if CS is reachable from MH
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Output: { "XPRTLD_VERSION" : "7.0.0.0", "LOCAL_NAME" : "lvmvom01.healthbc.org", "LOCAL_ADDR" : "10.248.224.116", "PEER_NAME" : "solvcstst01.vch.ca", "PEER_ADDR" : "139.173.8.6", "LOCAL_TIME" : "1460475025", "LOCALE" : "en_US.UTF-8", "DOMAIN_MODE" : "TRUE", "QUIESCE_MODE" : "RUNNING", "OSNAME" : "Linux", "OSRELEASE" : "2.6.32-573.22.1.el6.x86_64", "CPUTYPE" : "x86_64", "OSUUID" : "{00010050-56ad-1e25-0000-000000000000}", "DOMAINS" : { "sfm://lvmvom01.healthbc.org:5634/" : { "admin_url" : "vxss://lvmvom01.healthbc.org:14545/sfm_admin/sfm_domain/vx", "primary_broker" : "vxss://lvmvom01.healthbc.org:14545/sfm_agent/sfm_domain/vx" } } }
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:25] [solvcstst01.vch.ca] CS host (lvmvom01.healthbc.org) is resolvable
[04/12/2016 08:30:25] [solvcstst01.vch.ca] Trying to figure out if host is already part of the domain
[04/12/2016 08:30:25] [solvcstst01.vch.ca] ADD_HOST_SEND_CRED_MH
[04/12/2016 08:30:25] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_SEND_CRED_MH","STATE":"SUCCESS","PROGRESS":30}}
[04/12/2016 08:30:26] [solvcstst01.vch.ca] Output: SUCCESS
[04/12/2016 08:30:26] [solvcstst01.vch.ca] Return code: 0
[04/12/2016 08:30:28] [solvcstst01.vch.ca] push_exec command succeeded [/opt/VRTSsfmh/bin/getvmid_script]
[04/12/2016 08:30:29] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:29] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":75}}
[04/12/2016 08:30:29] [solvcstst01.vch.ca] Executing /opt/VRTSsfmh/bin/xprtlc -u "root" -t 1200 -j /var/opt/VRTSsfmh/xprtlc-payload-x2s4xFEb -l https://solvcstst01.vch.ca:5634/admin/cgi-bin/sfme.pl operation=configure_mh&cs-hostname=lvmvom01.healthbc.org&cs-ip=10.248.224.116&mh-hostname=solvcstst01.vch.ca&agent-password=******
[04/12/2016 08:30:32] [solvcstst01.vch.ca] Waiting for output from configure_mh----
[04/12/2016 08:30:32] [solvcstst01.vch.ca] ADD_HOST_INIT_DISCOVERY
[04/12/2016 08:30:32] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":0,"ERROR":"Success","NAME":"add_host","OUTPUT":"ADD_HOST_INIT_DISCOVERY","STATE":"SUCCESS","PROGRESS":80}}
[... the three lines above repeat unchanged from 08:30:33 through 08:30:58 while configure_mh runs ...]
[04/12/2016 08:30:58] [solvcstst01.vch.ca] fancy_die
[04/12/2016 08:30:58] [solvcstst01.vch.ca] CONFIGURE_MH_REG_FAILED
[04/12/2016 08:30:58] [solvcstst01.vch.ca] {"JOB":{"RETURNCODE":-1,"ERROR":"CONFIGURE_MH_REG_FAILED","NAME":"job_add_host","OUTPUT":"","STATE":"FAILED","PROGRESS":100}}{"RESULT":{"RETURNCODE":-1,"UMI":"V-383-50513-5760","ERROR":"CONFIGURE_MH_REG_FAILED","NAME":"add_host","TASKID":"{iHUXu2IK1ZRkTo7H}"}}
[04/12/2016 08:30:58] [solvcstst01.vch.ca] fancy_dead
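Since CONFIGURE_MH_REG_FAILED happens during the configure_mh step on the managed host, it is worth confirming raw TCP reachability of xprtld (port 5634) in both directions. A hedged sketch using bash's /dev/tcp (hostnames taken from the log above; run it on the MS pointing at the MH and vice versa):

```shell
# Hedged sketch: test whether a TCP connection to host:port succeeds
# within 3 seconds, using bash's /dev/tcp pseudo-device.
check_port() {
    host=$1; port=$2
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$host:$port reachable"
    else
        echo "$host:$port NOT reachable"
    fi
}

check_port solvcstst01.vch.ca 5634      # MH, as seen from the MS
check_port lvmvom01.healthbc.org 5634   # CS, as seen from the MH
```

If both directions pass, the next suspects are name resolution of the peer addresses on each side and the xprtld/sfme logs on the managed host.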
Infoscale cluster software installation aborted due to netbackup client

Hello All, I am facing an issue with the InfoScale 7.0 Veritas RAC software installation. The installation aborts with the error:

"CPI ERROR V-9-40-6501 Entered systems have different products installed:
Product Installed - Product Version - System Name
None - None - hostname
InfoScale Enterprise - 7.0.0.000 - hostname
Systems running different products must be operated independently
The following warnings were discovered on the systems:
CPI WARNING V-9-40-3861 NetBackup 7.6.0.4 was installed on hostname. The VRTSpbx rpms on hostname will not be uninstalled"

Has anybody faced this issue before? We get this error on a Linux 6.6 host. Any help to resolve it would be appreciated.
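The "None - None - hostname" row usually means CPI found no InfoScale packages on one node. A hedged sketch for comparing the Veritas package sets on both nodes before rerunning the installer ("node1"/"node2" are placeholder hostnames):

```shell
# Hedged sketch: list installed VRTS* packages on each cluster node so the
# product/version mismatch that CPI reports can be seen directly.
for h in node1 node2; do
    echo "== $h =="
    if command -v ssh >/dev/null 2>&1; then
        ssh -o BatchMode=yes -o ConnectTimeout=5 "$h" "rpm -qa 'VRTS*' | sort" \
            2>/dev/null || echo "(could not query $h)"
    else
        echo "(ssh not available on this machine)"
    fi
done
```

A node where the list comes back empty or partial (e.g. only the NetBackup-owned VRTSpbx) is the one CPI reports as "None"; installing or cleaning up InfoScale there first lets both systems present the same product to the installer.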
Problems installing Infoscale Storage
Hello, I am having a problem installing Veritas InfoScale Foundation on my openSUSE 12. Here is the installer summary:

The following warnings were discovered on the system:
Veritas InfoScale Enterprise Install did not complete successfully
VRTSvxvm rpm failed to install on linux-kaoc
VRTSaslapm rpm failed to install on linux-kaoc
VRTSglm rpm failed to install on linux-kaoc
VRTScavf rpm failed to install on linux-kaoc

Can you help me with this issue?
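When the installer only says an rpm "failed to install", retrying the first failing package by hand with `rpm --test` surfaces the underlying error (missing dependency, conflict, unsupported kernel). A hedged sketch; PKGDIR is an assumed media path, not a real one from this system:

```shell
# Hedged sketch: dry-run one failed package to see the real rpm error.
PKGDIR=${PKGDIR:-/mnt/infoscale/rpms}    # assumption: where the bundle was extracted
if command -v rpm >/dev/null 2>&1 && ls "$PKGDIR"/VRTSvxvm-*.rpm >/dev/null 2>&1; then
    rpm -ivh --test "$PKGDIR"/VRTSvxvm-*.rpm   # report problems, install nothing
else
    echo "rpm or $PKGDIR/VRTSvxvm-*.rpm not found; set PKGDIR to the media path"
fi
```

Note also that the InfoScale release notes list the supported distributions, and openSUSE is not the same as SLES, so a kernel/package mismatch at install time would not be surprising here.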
Veritas InfoScale 7.0: Configuring I/O fencing

I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split-brain condition. In a partitioned cluster, a split-brain condition occurs when one side of the partition thinks the other side is down and takes over its resources. When you install Veritas InfoScale Enterprise, the installer installs the I/O fencing driver, which is part of the VRTSvxfen package. After you install and configure the product, you must configure I/O fencing so that it can protect your data on shared disks.

You can configure disk-based I/O fencing, server-based I/O fencing, or majority-based I/O fencing. Before you configure I/O fencing, make sure that you meet the I/O fencing requirements. After you meet the requirements, refer to "About planning to configure I/O fencing" to perform the preparatory tasks and then configure I/O fencing.

For more details about I/O fencing configuration, see the Cluster Server Configuration and Upgrade Guide. InfoScale documentation for other platforms can be found on the SORT website.
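The three fencing modes correspond to settings in the /etc/vxfenmode file on each cluster node. As a rough, hedged illustration (check the key names against the vxfenmode template shipped with your release), a disk-based SCSI-3 configuration looks like:

```
# /etc/vxfenmode - illustrative fragment for disk-based fencing
vxfen_mode=scsi3         # scsi3 = disk-based; customized = server-based (CP server);
                         # majority = majority-based
scsi3_disk_policy=dmp    # access the coordinator disks through DMP
```

with the coordinator disk group named in /etc/vxfendg, and the fencing driver started through the vxfen service once configuration is complete.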