VMware backups over the SAN failing with error 23

zach1
Level 3

Hi Team,

We are having trouble configuring backups of our virtual machines over FC (SAN transport). Here is the setup:

 

1) Master server: SUSE 12 / NetBackup 8.1

2) Backup host: NetBackup appliance version 3.1

3) Clients: virtual machines running on vSphere 6.5, with a mix of VMFS 5.0 and 6.0 datastores

4) Virtual machines are hosted on Fujitsu storage.


The NetBackup appliance's HBA FC cables are connected to the SAN switch, and we have zoned the appliance with the storage controller port. The LUNs hosting the datastores on which these VMs reside are mapped to the NetBackup appliance. These mapped LUNs are visible on the appliance as /dev/sdr and /dev/sdh, i.e. the raw devices coming from the storage can be seen. Please check the screenshot attached.
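As a basic sanity check from the appliance's maintenance shell, the raw devices can be confirmed with standard Linux commands, roughly like this (the device name is just one of ours as an example, and lsscsi may or may not be installed on a given appliance):

    lsscsi              # list the SCSI devices the kernel has discovered
    fdisk -l /dev/sdr   # the partition table should show a single VMFS partition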

Now the problem: when we run the backup, it fails with the error "Error opening the snapshot disks using given transport mode: Status 23".

 

Checking the VXMS logs, they list the error below:

No path to device LVID:58b6b293-eef4a850-5a3d-54ab3a79c7df/58b6b293-e3df9650-ad01-54ab3a79c7df/1 found


As instructed in https://www.veritas.com/support/en_US/article.000126663, we have checked the LVIDs that the NetBackup appliance can see, and the LVID above is not listed in the output:

                  
    bkp-app01:/opt/Symantec/sdcssagent # blkid
    /dev/mapper/systemlog-log: UUID="5bdc72d0-0698-43b4-8435-1df8a224e7a8" TYPE="ext4"
    /dev/sdb1: UUID="1yXap0-WPJZ-tXe0-CpYa-NzWg-tnQp-QcvhIU" TYPE="LVM2_member"
    /dev/sdc1: UUID="ddba98f7-a490-4c71-86ee-093daedf0d6b" TYPE="ext4"
    /dev/sdc2: UUID="fPuiGM-kSPt-dTKC-dShj-WWBc-SOAS-FxhqdX" TYPE="LVM2_member"
    /dev/mapper/system-swap: UUID="14a3528d-1839-4d0b-ba2b-8a957c154e7a" TYPE="swap"
    /dev/mapper/system-root: UUID="68a9a91b-a4fe-411d-a004-0cf8b6eb1093" TYPE="ext4"
    /dev/sde1: UUID_SUB="59199b64-b42a2d10-3616-54ab3a79c7df" TYPE="VMFS_volume_member"
    /dev/sdf1: UUID_SUB="5919999e-f6897740-e637-54ab3a79c7df" TYPE="VMFS_volume_member"
    /dev/sdh1: UUID_SUB="59199b64-b42a2d10-3616-54ab3a79c7df" TYPE="VMFS_volume_member"
    /dev/sdi1: UUID_SUB="5919999e-f6897740-e637-54ab3a79c7df" TYPE="VMFS_volume_member"
    /dev/mapper/system-inst-real: UUID="414b6552-efe0-4f23-8685-eb7fd645e2c5" TYPE="ext4"
    /dev/mapper/system-inst: UUID="414b6552-efe0-4f23-8685-eb7fd645e2c5" TYPE="ext4"
    /dev/mapper/system-inst_SNAP_FACTORY-cow: TYPE="DM_snapshot_cow"
    /dev/mapper/system-inst_SNAP_FACTORY: UUID="414b6552-efe0-4f23-8685-eb7fd645e2c5" TYPE="ext4"
    /dev/mapper/system-rep: UUID="aac26806-2450-44ab-9d69-976f8b69615c" TYPE="ext3"
    /dev/vx/dmp/disk_0s1: UUID="1yXap0-WPJZ-tXe0-CpYa-NzWg-tnQp-QcvhIU" TYPE="LVM2_member"
    /dev/vx/dmp/disk_1s1: UUID="ddba98f7-a490-4c71-86ee-093daedf0d6b" TYPE="ext4"
    /dev/vx/dmp/disk_1s2: UUID="fPuiGM-kSPt-dTKC-dShj-WWBc-SOAS-FxhqdX" TYPE="LVM2_member"
    /dev/vx/dmp/fj_dxm0_001bs1: UUID_SUB="5919999e-f6897740-e637-54ab3a79c7df" TYPE="VMFS_volume_member"
    /dev/vx/dmp/fj_dxm0_001fs1: UUID_SUB="59199b64-b42a2d10-3616-54ab3a79c7df" TYPE="VMFS_volume_member"
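As I understand the article, the components of the LVID in the VXMS error are the datastore's VMFS volume UUIDs, so if the right LUN were mapped, a grep against the blkid output should find a match. In our case neither component is found:

    # neither UUID from the "No path to device LVID:..." line appears above
    blkid | egrep "58b6b293-eef4a850-5a3d-54ab3a79c7df|58b6b293-e3df9650-ad01-54ab3a79c7df"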


Now, we know it is something to do with the LUN mapping, but the storage team insists that since both disks (sdh and sdi) are seen by the OS (as in the screenshot), it is not an issue on the storage side.

Can anyone provide more insight on this?

What is the difference between the fdisk and blkid commands? Why are the LUNs not visible when blkid is run, even though the disks are seen with fdisk? What else can be done? I have asked the storage team to recreate the zones and redo the mapping, though I am not sure that will help.
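My rough understanding of the difference, for what it's worth (illustrated with one of the devices above, as a sketch only):

    # fdisk only reads the partition table, so a disk shows up here even when
    # the filesystem inside its partition is unreadable or not what we expect
    fdisk -l /dev/sdh

    # blkid probes for known filesystem signatures (ext4, LVM2_member, VMFS, ...);
    # a LUN whose content is not the expected datastore can appear in fdisk
    # output yet produce no matching VMFS_volume_member line in blkid
    blkid /dev/sdh1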

Any help is much appreciated.


2 REPLIES

Arturo_Lopez
Level 5
Partner Accredited

Hi,

We often hit a similar problem when a new LUN or LDEV is presented to an appliance: the appliance does not see the new device. Try the Show and Scan commands in the appliance CLI, under Manage > FibreChannel (sketched below). If the LUNs still do not appear, try rebooting the appliance.

In our case we always need to reboot the appliance. If that does not work either, then speak with the storage administrator.
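From the CLISH the sequence looks roughly like this (menu names from memory, so they may differ slightly between appliance versions):

    Main_Menu > Manage > FibreChannel > Show   # show the FC ports and the devices visible on them
    Main_Menu > Manage > FibreChannel > Scan   # rescan the FC bus for newly presented LUNs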

zach1
Level 3

Thanks, Arturo.

Indeed, it was a problem on the storage side. After working extensively with the VMware and storage teams and comparing the various IDs and UUIDs, we were able to get it working. The bottom line is that the correct LUNs were not mapped to the NetBackup appliance.

The storage team had created one LUN for each datastore but did not follow the correct naming convention: e.g., the LUN named "PROD" was actually backing the datastore named "DEV" and vice versa, hence the confusion.
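For anyone who hits the same issue, this is roughly how the IDs can be cross-checked with standard esxcli and blkid commands (nothing appliance-specific assumed):

    # on an ESXi host that mounts the datastore: list each VMFS volume's
    # UUID and its backing device
    esxcli storage vmfs extent list

    # on the NetBackup appliance: every datastore UUID from the step above
    # should appear as a UUID_SUB value here
    blkid | grep VMFS_volume_member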