services not started after node reboot in VCS 5.1
Hello, we have a 4-node VCS 5.1 cluster (EngineVersion 5.1.10.40) configured with I/O fencing. Two of the nodes are for the applications and the other two for the DB; the applications and the DB are configured to run only on their designated nodes. We are testing a complete heartbeat failure by shutting down the switch. The fencing works properly and three nodes are rebooted. We have observed a situation where two of the service groups were not started after both application nodes had rebooted; they were left in PARTIAL state:

  mmsoap-rg  State  lpdmc1p  |PARTIAL|
  mmsoap-rg  State  lpdmc2p  |OFFLINE|
  smppc-rg   State  lpdmc1p  |PARTIAL|
  smppc-rg   State  lpdmc2p  |OFFLINE|

Here is their configuration:

group mmsoap-rg (
    SystemList = { lpdmc1p = 0, lpdmc2p = 1 }
    AutoStartList = { lpdmc1p, lpdmc2p }
    )

    IP mmsoap-lh-res (
        Device = bond0
        Address = "10.40.248.199"
        NetMask = "255.255.255.224"
        )

    LVMLogicalVolume opt-mmsoap-lv-res (
        VolumeGroup = mmsoap-vg
        LogicalVolume = opt-mmsoap-lv
        )

    LVMLogicalVolume var-opt-mmsoap-lv-res (
        VolumeGroup = mmsoap-vg
        LogicalVolume = var-opt-mmsoap-lv
        )

    LVMVolumeGroup mmsoap-vg-res (
        VolumeGroup = mmsoap-vg
        EnableLVMTagging = 1
        )

    Mount opt-mmsoap-mnt-res (
        MountOpt = "rw,noatime,nodiratime,nosuid,nodev"
        FsckOpt = "-y"
        BlockDevice = "/dev/mapper/mmsoap--vg-opt--mmsoap--lv"
        MountPoint = "/opt/mmsoap"
        FSType = ext3
        )

    Mount var-opt-mmsoap-mnt-res (
        MountOpt = "rw,noatime,nodiratime,nosuid,nodev"
        FsckOpt = "-y"
        BlockDevice = "/dev/mapper/mmsoap--vg-var--opt--mmsoap--lv"
        MountPoint = "/var/opt/mmsoap"
        FSType = ext3
        )

    NIC mmsoap-nic-res (
        Device = bond0
        )

    SicapApplication mmsoap-app-res (
        AppUser = mmsoap
        )

    requires group smppc-rg online global firm
    mmsoap-app-res requires mmsoap-lh-res
    mmsoap-app-res requires opt-mmsoap-mnt-res
    mmsoap-app-res requires var-opt-mmsoap-mnt-res
    mmsoap-lh-res requires mmsoap-nic-res
    opt-mmsoap-lv-res requires mmsoap-vg-res
    opt-mmsoap-mnt-res requires opt-mmsoap-lv-res
    var-opt-mmsoap-lv-res requires mmsoap-vg-res
    var-opt-mmsoap-mnt-res requires var-opt-mmsoap-lv-res

    // resource dependency tree
    //
    // group mmsoap-rg
    // {
    //     SicapApplication mmsoap-app-res
    //     {
    //         IP mmsoap-lh-res
    //         {
    //             NIC mmsoap-nic-res
    //         }
    //         Mount opt-mmsoap-mnt-res
    //         {
    //             LVMLogicalVolume opt-mmsoap-lv-res
    //             {
    //                 LVMVolumeGroup mmsoap-vg-res
    //             }
    //         }
    //         Mount var-opt-mmsoap-mnt-res
    //         {
    //             LVMLogicalVolume var-opt-mmsoap-lv-res
    //             {
    //                 LVMVolumeGroup mmsoap-vg-res
    //             }
    //         }
    //     }
    // }

group smppc-rg (
    SystemList = { lpdmc1p = 0, lpdmc2p = 1 }
    AutoStartList = { lpdmc1p, lpdmc2p }
    )

    IP smppc-lh-res (
        Device = bond0
        Address = "10.40.248.200"
        NetMask = "255.255.255.224"
        )

    LVMLogicalVolume opt-smppc-lv-res (
        VolumeGroup = smppc-vg
        LogicalVolume = opt-smppc-lv
        )

    LVMLogicalVolume var-opt-smppc-lv-res (
        VolumeGroup = smppc-vg
        LogicalVolume = var-opt-smppc-lv
        )

    LVMVolumeGroup smppc-vg-res (
        VolumeGroup = smppc-vg
        EnableLVMTagging = 1
        )

    Mount opt-smppc-mnt-res (
        MountOpt = "rw,noatime,nodiratime,nosuid,nodev"
        FsckOpt = "-y"
        BlockDevice = "/dev/mapper/smppc--vg-opt--smppc--lv"
        MountPoint = "/opt/smppc"
        FSType = ext3
        )

    Mount var-opt-smppc-mnt-res (
        MountOpt = "rw,noatime,nodiratime,nosuid,nodev"
        FsckOpt = "-y"
        BlockDevice = "/dev/mapper/smppc--vg-var--opt--smppc--lv"
        MountPoint = "/var/opt/smppc"
        FSType = ext3
        )

    NIC smppc-nic-res (
        Device = bond0
        )

    SicapApplication smppc-app-res (
        AppUser = smppc
        )

    requires group mmg-rg online global firm
    opt-smppc-lv-res requires smppc-vg-res
    opt-smppc-mnt-res requires opt-smppc-lv-res
    smppc-app-res requires opt-smppc-mnt-res
    smppc-app-res requires smppc-lh-res
    smppc-app-res requires var-opt-smppc-mnt-res
    smppc-lh-res requires smppc-nic-res
    var-opt-smppc-lv-res requires smppc-vg-res
    var-opt-smppc-mnt-res requires var-opt-smppc-lv-res

    // resource dependency tree
    //
    // group smppc-rg
    // {
    //     SicapApplication smppc-app-res
    //     {
    //         Mount opt-smppc-mnt-res
    //         {
    //             LVMLogicalVolume opt-smppc-lv-res
    //             {
    //                 LVMVolumeGroup smppc-vg-res
    //             }
    //         }
    //         IP smppc-lh-res
    //         {
    //             NIC smppc-nic-res
    //         }
    //         Mount var-opt-smppc-mnt-res
    //         {
    //             LVMLogicalVolume var-opt-smppc-lv-res
    //             {
    //                 LVMVolumeGroup smppc-vg-res
    //             }
    //         }
    //     }
    // }

The state of their resources:

  #Resource               Attribute  System   Value
  mmsoap-lh-res           State      lpdmc1p  OFFLINE
  mmsoap-lh-res           State      lpdmc2p  OFFLINE
  opt-mmsoap-lv-res       State      lpdmc1p  ONLINE
  opt-mmsoap-lv-res       State      lpdmc2p  OFFLINE
  var-opt-mmsoap-lv-res   State      lpdmc1p  ONLINE
  var-opt-mmsoap-lv-res   State      lpdmc2p  OFFLINE
  mmsoap-vg-res           State      lpdmc1p  OFFLINE
  mmsoap-vg-res           State      lpdmc2p  OFFLINE
  opt-mmsoap-mnt-res      State      lpdmc1p  OFFLINE
  opt-mmsoap-mnt-res      State      lpdmc2p  OFFLINE
  var-opt-mmsoap-mnt-res  State      lpdmc1p  OFFLINE
  var-opt-mmsoap-mnt-res  State      lpdmc2p  OFFLINE
  mmsoap-nic-res          State      lpdmc1p  ONLINE
  mmsoap-nic-res          State      lpdmc2p  ONLINE
  mmsoap-app-res          State      lpdmc1p  OFFLINE
  mmsoap-app-res          State      lpdmc2p  OFFLINE
  smppc-lh-res            State      lpdmc1p  OFFLINE
  smppc-lh-res            State      lpdmc2p  OFFLINE
  opt-smppc-lv-res        State      lpdmc1p  ONLINE
  opt-smppc-lv-res        State      lpdmc2p  OFFLINE
  var-opt-smppc-lv-res    State      lpdmc1p  ONLINE
  var-opt-smppc-lv-res    State      lpdmc2p  OFFLINE
  smppc-vg-res            State      lpdmc1p  OFFLINE
  smppc-vg-res            State      lpdmc2p  OFFLINE
  opt-smppc-mnt-res       State      lpdmc1p  OFFLINE
  opt-smppc-mnt-res       State      lpdmc2p  OFFLINE
  var-opt-smppc-mnt-res   State      lpdmc1p  OFFLINE
  var-opt-smppc-mnt-res   State      lpdmc2p  OFFLINE
  smppc-nic-res           State      lpdmc1p  ONLINE
  smppc-nic-res           State      lpdmc2p  ONLINE
  smppc-app-res           State      lpdmc1p  OFFLINE
  smppc-app-res           State      lpdmc2p  OFFLINE

In the engine log I had the following messages related to them:

2016/04/05 02:27:45 VCS NOTICE V-16-1-10181 Group mmsoap-rg AutoRestart set to 1
2016/04/05 02:27:46 VCS INFO V-16-1-10304 Resource mmsoap-lh-res (Owner: Unspecified, Group: mmsoap-rg) is offline on lpdmc1p (First probe)
2016/04/05 02:27:46 VCS INFO V-16-1-10297 Resource opt-mmsoap-lv-res (Owner: Unspecified, Group: mmsoap-rg) is online on lpdmc1p (First probe)
2016/04/05 02:27:46 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group mmsoap-rg on all nodes
2016/04/05 02:27:46 VCS INFO V-16-1-10297 Resource var-opt-mmsoap-lv-res (Owner: Unspecified, Group: mmsoap-rg) is online on lpdmc1p (First probe)
2016/04/05 02:27:46 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group mmsoap-rg on all nodes
2016/04/05 02:27:46 VCS INFO V-16-1-10304 Resource opt-mmsoap-mnt-res (Owner: Unspecified, Group: mmsoap-rg) is offline on lpdmc1p (First probe)
2016/04/05 02:27:46 VCS INFO V-16-1-10304 Resource var-opt-mmsoap-mnt-res (Owner: Unspecified, Group: mmsoap-rg) is offline on lpdmc1p (First probe)
2016/04/05 02:27:47 VCS INFO V-16-1-10304 Resource mmsoap-app-res (Owner: Unspecified, Group: mmsoap-rg) is offline on lpdmc1p (First probe)
2016/04/05 02:27:48 VCS INFO V-16-1-10304 Resource mmsoap-vg-res (Owner: Unspecified, Group: mmsoap-rg) is offline on lpdmc1p (First probe)
2016/04/05 02:27:48 VCS NOTICE V-16-1-10438 Group mmsoap-rg has been probed on system lpdmc1p

If I onlined the groups manually, that worked, but I had to online each of them separately:

2016/04/05 05:58:03 VCS INFO V-16-1-50135 User root fired command: hagrp -online -any smppc-rg localclus from localhost
2016/04/05 05:58:03 VCS NOTICE V-16-1-10301 Initiating Online of Resource smppc-lh-res (Owner: Unspecified, Group: smppc-rg) on System lpdmc1p
2016/04/05 05:58:03 VCS NOTICE V-16-1-10301 Initiating Online of Resource smppc-vg-res (Owner: Unspecified, Group: smppc-rg) on System lpdmc1p
2016/04/05 05:58:03 VCS NOTICE V-16-1-10301 Initiating Online of Resource opt-smppc-mnt-res (Owner: Unspecified, Group: smppc-rg) on System lpdmc1p
2016/04/05 05:58:03 VCS NOTICE V-16-1-10301 Initiating Online of Resource var-opt-smppc-mnt-res (Owner: Unspecified, Group: smppc-rg) on System lpdmc1p
2016/04/05 05:58:04 VCS ERROR V-16-10031-14001 (lpdmc1p) LVMVolumeGroup:smppc-vg-res:online:Activation of volume group failed.
2016/04/05 05:58:05 VCS INFO V-16-1-10298 Resource opt-smppc-mnt-res (Owner: Unspecified, Group: smppc-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 05:58:05 VCS INFO V-16-1-10298 Resource var-opt-smppc-mnt-res (Owner: Unspecified, Group: smppc-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 05:58:06 VCS INFO V-16-1-10298 Resource smppc-vg-res (Owner: Unspecified, Group: smppc-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 05:58:16 VCS INFO V-16-1-10298 Resource smppc-lh-res (Owner: Unspecified, Group: smppc-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 05:58:16 VCS NOTICE V-16-1-10301 Initiating Online of Resource smppc-app-res (Owner: Unspecified, Group: smppc-rg) on System lpdmc1p
2016/04/05 05:58:16 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Running preonline for resource smppc-app-res
2016/04/05 05:58:16 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Preonline for resource smppc-app-res finished
2016/04/05 05:58:16 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Starting resource smppc-app-res
2016/04/05 05:58:21 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Resource smppc-app-res is started
2016/04/05 05:58:34 VCS INFO V-16-1-10298 Resource smppc-app-res (Owner: Unspecified, Group: smppc-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 05:58:34 VCS NOTICE V-16-1-10447 Group smppc-rg is online on system lpdmc1p
2016/04/05 05:59:52 VCS INFO V-16-1-50135 User root fired command: hagrp -online -any mmsoap-rg localclus from localhost
2016/04/05 05:59:52 VCS NOTICE V-16-1-10301 Initiating Online of Resource mmsoap-lh-res (Owner: Unspecified, Group: mmsoap-rg) on System lpdmc1p
2016/04/05 05:59:52 VCS NOTICE V-16-1-10301 Initiating Online of Resource mmsoap-vg-res (Owner: Unspecified, Group: mmsoap-rg) on System lpdmc1p
2016/04/05 05:59:52 VCS NOTICE V-16-1-10301 Initiating Online of Resource opt-mmsoap-mnt-res (Owner: Unspecified, Group: mmsoap-rg) on System lpdmc1p
2016/04/05 05:59:52 VCS NOTICE V-16-1-10301 Initiating Online of Resource var-opt-mmsoap-mnt-res (Owner: Unspecified, Group: mmsoap-rg) on System lpdmc1p
2016/04/05 05:59:53 VCS ERROR V-16-10031-14001 (lpdmc1p) LVMVolumeGroup:mmsoap-vg-res:online:Activation of volume group failed.
2016/04/05 05:59:53 VCS INFO V-16-1-10298 Resource opt-mmsoap-mnt-res (Owner: Unspecified, Group: mmsoap-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 05:59:53 VCS INFO V-16-1-10298 Resource var-opt-mmsoap-mnt-res (Owner: Unspecified, Group: mmsoap-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 05:59:54 VCS INFO V-16-1-10298 Resource mmsoap-vg-res (Owner: Unspecified, Group: mmsoap-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 06:00:02 VCS INFO V-16-1-10298 Resource mmsoap-lh-res (Owner: Unspecified, Group: mmsoap-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 06:00:02 VCS NOTICE V-16-1-10301 Initiating Online of Resource mmsoap-app-res (Owner: Unspecified, Group: mmsoap-rg) on System lpdmc1p
2016/04/05 06:00:02 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Running preonline for resource mmsoap-app-res
2016/04/05 06:00:03 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Preonline for resource mmsoap-app-res finished
2016/04/05 06:00:03 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Starting resource mmsoap-app-res
2016/04/05 06:00:09 VCS INFO V-16-1-0 (lpdmc1p) SicapApplication:???:???:Resource mmsoap-app-res is started
2016/04/05 06:00:21 VCS INFO V-16-1-10298 Resource mmsoap-app-res (Owner: Unspecified, Group: mmsoap-rg) is online on lpdmc1p (VCS initiated)
2016/04/05 06:00:21 VCS NOTICE V-16-1-10447 Group mmsoap-rg is online on system lpdmc1p

What can I do to have these service groups started automatically in a case like this, when the heartbeat is lost and the nodes are rebooted due to a fencing panic?

Thank you in advance,
Laszlo
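A minimal command-line sketch for inspecting and recovering the groups above; the group and node names come from the post, the attributes are standard VCS group attributes, and the suggestion that a group probed PARTIAL is skipped by AutoStart is offered as a likely explanation rather than a confirmed diagnosis:

  # Check the attributes that control automatic startup of the group
  hagrp -value mmsoap-rg AutoStart
  hagrp -value mmsoap-rg AutoRestart
  hagrp -value mmsoap-rg AutoStartList

  # The first-probe messages show the LVMLogicalVolume resources online while
  # the rest of the group is offline, leaving the group PARTIAL after probing.
  # Flushing any pending operation and onlining by hand is the manual recovery:
  hagrp -flush mmsoap-rg -sys lpdmc1p
  hagrp -online mmsoap-rg -sys lpdmc1p

The same sequence would apply to smppc-rg; the longer-term question is why the logical volumes are found active after the fencing reboot while the tagged volume group resource is not.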
Cluster node not starting as one node is down.

Hi Experts, I need help in understanding a 1+1 cluster. My master node is down due to an OS issue, and the slave node should ideally take over, but I see it is not starting. Attached is the Engine.log, where I can see some "ShutdownTimeout" messages; please let me know how to increase it. (I am not able to use hastatus -- it says Veritas is not started -- so I cannot execute hares to increase the ShutdownTimeout.)

1. In a 1+1 cluster, if the master node is down, I need the slave to run as normal. What should I do for this?
2. How do I set ShutdownTimeout?
3. Are there any known bugs in 5.1 SP1RP4?
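A hedged sketch of the usual single-node checks, assuming the engine is not starting because the cluster has not seeded while the peer node is down. The commands are standard LLT/GAB/VCS utilities; the timeout value is only an example, and ShutdownTimeout is a per-system attribute, so it is set with hasys rather than hares:

  # Confirm the state of the lower layers on the surviving node
  lltstat -nvv
  gabconfig -a

  # If GAB has not seeded because the peer is down, seed it manually.
  # Only do this when you are certain the other node is really down (split-brain risk).
  gabconfig -x

  # Start the engine and watch the log
  hastart
  tail -f /var/VRTSvcs/log/engine_A.log

  # Once had is running, ShutdownTimeout can be changed (600 seconds is an example)
  haconf -makerw
  hasys -modify <system> ShutdownTimeout 600
  haconf -dump -makero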
Why the service group can't be taken online automatically after fixing split-brain

The title of this thread was changed from "what's the meaning of 'restart mode'" to "Why the service group can't be taken online automatically after fixing split-brain".
------------------------------------------------
We are running VCS 6.0.2 on RHEL 6.5. Below is our cluster configuration:

Heartbeat links: eth3, eth4
Low-priority heartbeat link: not enabled
Fencing: not enabled
The cluster contains 2 servers: jarry-crf1, jarry-crf2.

Service groups:
TestGrp1 contains a FileOnOff resource; Parallel mode is enabled.
TestGrp2 contains a FileOnOff resource, depends on TestGrp1, and Failover mode is enabled.

Test steps:
1. Take TestGrp1 online on both servers; take TestGrp2 online on server jarry-crf2.
2. Stop both heartbeat links on server jarry-crf1 with "ifdown eth3; sleep 60; ifdown eth4".
3. Recover the heartbeat links with "ifup eth3; ifup eth4".

Then we found that the "had" process was restarted on server jarry-crf2, and we found the following in engine_A.log:

2015/02/10 00:48:43 VCS NOTICE V-16-1-10433 Group TestGrp2 will not start automatically on System jarry-crf2 as the system is in restart mode.
2015/02/10 00:48:43 VCS NOTICE V-16-1-10433 Group TestGrp1 will not start automatically on System jarry-crf2 as the system is in restart mode.
2015/02/10 00:48:43 VCS NOTICE V-16-1-10445 Group TestGrp1 will not start automatically as atleast one system in the SystemList attribute of the group is in restart mode.
2015/02/10 00:48:47 VCS NOTICE V-16-1-10433 Group VCShmg will not start automatically on System jarry-crf2 as the system is in restart mode.
2015/02/10 00:48:47 VCS NOTICE V-16-1-10445 Group VCShmg will not start automatically as atleast one system in the SystemList attribute of the group is in restart mode.

My questions:
1. Why was the "had" process restarted after the heartbeat was recovered?
2. What does "restart mode" mean, and how do we bring the service groups out of restart mode so that they start automatically?

Thanks in advance!
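As a sketch of the manual recovery once the heartbeat links are back and both nodes appear in the GAB membership (group and node names are taken from the post; this does not by itself explain why had restarted):

  gabconfig -a                              # both nodes should appear in port a and port h
  hastatus -sum
  hagrp -online TestGrp1 -sys jarry-crf1    # TestGrp1 is parallel, online it on both nodes
  hagrp -online TestGrp1 -sys jarry-crf2
  hagrp -online TestGrp2 -sys jarry-crf2    # TestGrp2 is the failover group

Separately, enabling a low-priority heartbeat link (link-lowpri in /etc/llttab) or I/O fencing is the usual way to make this kind of network partition less likely in the first place.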
Hi all, I'm in the midst of planning a heartbeat switch migration and need to iron out a few kinks. My environment consists of multiple VCS clusters running on Windows 2003/2008. One of the network switches connecting all the heartbeats needs to be replaced (it is still online; it just needs to be replaced because it is reaching end of support). My headache is how to perform the migration without interrupting the clustered applications. Each cluster node runs 2-3 heartbeat adapters, so initially I thought that replacing the connections one by one would work flawlessly. But when I tested it in a test environment, after I removed and reconnected one heartbeat connection from each pair of nodes, the nodes' state remained "Jeopardy" even after reconnection of the heartbeat cables.

Do you have any idea of the best method to migrate all heartbeat connections to a different switch online?

What I did during testing:
1. Freeze all service groups.
2. Unplug the HB1 connections for both NodeA and NodeB from oldSwitch and connect them to newSwitch. Both NodeA and NodeB turned to "Jeopardy" at this stage and did not return to the normal state even after reconnection to the new switch.
3. I did not proceed to do the same with the HB2 connections for both nodes because of my concern about that "Jeopardy" state; I was afraid that if I proceeded it would create two mini-clusters instead.
4. To bring both nodes out of the Jeopardy state, I ran the Cluster Configuration Wizard on NodeA and reconfigured the LLT connection to reset communication between the heartbeat adapters.

Appreciate any advice. Thanks in advance.
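A hedged sketch of the checks that help when moving one link at a time. It assumes the same LLT/GAB command-line utilities are available on VCS for Windows as on UNIX; <group> is a placeholder:

  # Optionally freeze the groups persistently for the duration of the work
  haconf -makerw
  hagrp -freeze <group> -persistent
  haconf -dump -makero

  # After moving each heartbeat cable, wait until the link shows UP again on
  # BOTH nodes before touching the next one; jeopardy is expected to clear once
  # all configured links are back
  lltstat -nvv
  gabconfig -a

  # When every link is confirmed UP, unfreeze
  haconf -makerw
  hagrp -unfreeze <group> -persistent
  haconf -dump -makero

If lltstat shows a link stuck in DOWN even after reconnection, that output is the thing to compare against the LLT configuration (or to give to support) before moving the remaining links.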
Hi All, I tried to bring an SG online on a node but it is not coming online. Let me explain the issue. We rebooted node aixprd001 and found that /etc/filesystem was corrupted, so the SG bosinit_SG was in PARTIAL state since a lot of the cluster filesystems were not mounted. We then corrected the entries and mounted all the filesystems manually, but the SG still showed PARTIAL, so we ran the command below:

hagrp -clear bosinit_SG -all

Once done, the SG was in ONLINE state. To be on the safe side we tried to offline the SG and bring it online again, but the SG failed to come online. Below is the only error we were able to find in the engine_A.log file:

2014/12/17 06:49:04 VCS NOTICE V-16-1-10166 Initiating manual online of group bosinit_SG on system aixprd001
2014/12/17 06:49:04 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group bosinit_SG on all nodes

Please help me by providing suggestions; I will provide the output of logs if needed.

Thanks, Rufus
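A short diagnostic sketch, assuming the next step is to find which resource the online is waiting on. Group and node names are from the post; <resource> is a placeholder to be repeated for each resource that is not ONLINE:

  hastatus -sum                   # overall view: is the group waiting, faulted, or frozen?
  hagrp -resources bosinit_SG     # list the resources in the group
  hares -display <resource>       # per-resource state, flags and last error
  hagrp -flush bosinit_SG -sys aixprd001   # clear a stuck online/offline wait before retrying

  # The agent logs often carry the real reason, e.g.
  #   /var/VRTSvcs/log/engine_A.log and the per-agent *_A.log files in the same directory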
Hi All, I am new to VCS but good with HACMP. In our environment we are using VCS 6.0. On one server we found that the SG does not move from one node to another when we try a manual failover with the command below:

hagrp -switch <SGname> -to <sysname>

We can see that the SG goes offline on the current node, but it does not come online on the secondary node. There is no error logged in engine_A.log except the entry below:

cpus load more than 60% <secondary node name>

Can anyone help me find a solution for this? I will provide the output of any commands if you need more info to help troubleshoot. :)

Thanks,
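A hedged checklist of commands that usually narrow down a stalled switch; names in angle brackets are placeholders, and nothing here assumes the CPU-load message is the actual cause:

  hastatus -sum                          # is the group waiting to online, faulted, or frozen?
  hagrp -value <SGname> SystemList       # is the target node in the group's SystemList?
  hagrp -value <SGname> Frozen           # persistent freeze?
  hagrp -value <SGname> TFrozen          # temporary freeze?
  hares -state | grep -i FAULTED         # a faulted critical resource blocks the online
  hagrp -clear <SGname> -sys <sysname>   # clear faults on the target node, then retry the switch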
Hello Experts, I am facing a simple but tricky issue in VCS. A non-existent system with the hostname "-i" has been added to the cluster. I am not aware of how this system was added to the list. The system list is below:

bash-3.00# hasys -list
-i
MMINDIA01
MMINDIA02
MMINDIA03
MMINDIA04

I tried "hasys -delete -i" and "hasys -delete /i" but with no success. Kindly help on priority.

Regards, Amit Mane
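One common workaround when the CLI keeps treating "-i" as an option is to remove the entry from the configuration files directly. This is a sketch only; take backups first and plan for the HAD restart:

  hastop -all -force                    # stop had on all nodes; applications keep running
  cd /etc/VRTSvcs/conf/config
  cp main.cf main.cf.before-edit        # backup
  # edit main.cf and remove the bogus "-i" system definition and any references
  # to it in SystemList / AutoStartList entries, then verify and restart:
  hacf -verify /etc/VRTSvcs/conf/config
  hastart                               # on this node first, then hastart on the others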
I have installed Symantec Storage Foundation 4.0 for Linux and Veritas Cluster Server 4.1 with the Oracle agents on Red Hat Enterprise Linux 4 update 8. I have installed two nodes in the cluster. Everything worked perfectly, then there was a power failure. After that, one node changed its status to UNKNOWN. In the log file:

I tried to implement the recommendations in http://www.symantec.com/business/support/index?page=content&id=TECH54873 but it did not help. What can I do to start the other node (erpnode2)?
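A bottom-up checklist for the node that shows UNKNOWN, using standard LLT/GAB/VCS commands; the log path is the default one:

  lltconfig                 # is LLT configured and running?
  lltstat -nvv              # are the private links to the peer up?
  gabconfig -a              # has GAB seeded? is there port a / port h membership?
  ps -ef | grep had         # is the engine process running at all?
  hastart                   # try starting the engine
  tail -100 /var/VRTSvcs/log/engine_A.log

Whichever layer fails first (LLT, GAB, or had) usually points at the fix: broken heartbeat links, an unseeded cluster, or a stale configuration preventing had from starting.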
Need Active/Active solution

Dear Gurus, my client is planning to move to some HA products but is still not sure which one to choose. The requirement is an active-active HA setup across dual data centres. They are ready to buy products; I recommended SF cluster, but I am not sure which edition is best. Kindly suggest which combination suits the products below.

Available DC setup: 1 active DC, 1 active near DC and 1 DR location

Products in hand:
1) Oracle and IBM hardware
2) Storage (3 x VMAX 20K)
3) EMC VPLEX
4) DB - Oracle RAC (11gR2)

Now, which Symantec HA clustering topology and method can help achieve 99.99%? To make HA active-active, what minimum prerequisites need to be maintained on the network, SAN and storage side? How about IBM P780 and EMC VPLEX with SF RAC for Active-Active-Standby?

Thanks, Jay
Application Agent hang causes no-brain situation

Wondering if anyone has seen this before, what the cause may be, and whether there is an automated recovery scenario.

Situation: We run a 3-node cluster (with a 3-node GCO cluster at a remote site) running VCS 5.1 SP1 on Dell R411 servers. This past Saturday, our operations team was performing a standard switchover of our primary resources (applications) to a standby node. On switchover to the new node, the IP resource (which is first in the dependency tree) was started. VCS then reported it was starting the first of our seven Application resources, but none were started. [As an aside, this node had run the Application resources within the past 3 weeks and they are currently running on that node, so there was no problem with the applications.] It appeared that the Application agent was hung: we could interact with the had daemon for statistics and some commands, but hastop commands (or variants) would not complete (i.e., we had to Ctrl-C them since they would not finish). This left us in a no-brain situation. There were no log entries or traps indicating the had daemon was having a problem with the Application agent. Worse, the had daemon did not try to recover from the no-brain situation, at least for the 15 minutes we spent trying CLI commands to clear the issue. We were eventually able to recover from the no-brain by rebooting the server where the issue was occurring. We have a 24x7 operation, and outages over 4 minutes can be very detrimental to our customers.

How do we know it was an Application agent hang? We have been able to create the same situation in our lab by attaching to one or more of the Application agent threads and causing them to halt on a standby node, then switching over to that node. The Application resources are not started and the had daemon does not try to recover from the situation (or if it does, it says it is restarting the Application agent and then says it is already up), basically leaving us in no-brain.

Also, we are migrating to VCS 6.0.1 in the next month and we see the same behaviour with that release.

Has anyone seen this before? Is it a known VCS bug? Is there some way to automatically recover from this to keep us out of an extended no-brain?
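If the agent itself is hung, a less disruptive escape hatch than rebooting may be to restart just that agent. This is a hedged sketch: the haagent options shown are standard, <standby-node> is a placeholder, and whether it clears this particular hang is unverified:

  haagent -list                                    # confirm the agent name
  haagent -display Application
  haagent -stop Application -force -sys <standby-node>
  haagent -start Application -sys <standby-node>

It may also be worth reviewing the cluster-level AgentReplyTimeout attribute (haclus -value AgentReplyTimeout), which governs how long had waits for an agent to respond before declaring it faulted and restarting it.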