02-07-2014 04:23 AM
Hi
We have an EMC Datadomain DD640 appliance which has run out of disk space. It appears that Netbackup has not been expiring images correctly, as there are currently 1700 backup images on the data domain storage unit, but a Netbackup "images on disk" report for that storage unit reports that there should only be 233 backup images.
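As a rough way to size a discrepancy like this, the catalog listing can be diffed against the storage-unit contents with standard tools. This is only a sketch: the two input files are toy stand-ins that you would in practice populate yourself (for example from `bpimagelist` output and a listing of the Data Domain storage unit), one backup ID per line.

```shell
# Toy inputs: in practice, populate these yourself (e.g. from bpimagelist
# output and a listing of the Data Domain storage unit), one ID per line.
printf 'img_a\nimg_b\nimg_c\n' > catalog_ids.txt            # 3 in catalog
printf 'img_a\nimg_b\nimg_c\nimg_d\nimg_e\n' > disk_ids.txt # 5 on disk

# comm(1) requires sorted input.
sort -u catalog_ids.txt > catalog_sorted.txt
sort -u disk_ids.txt    > disk_sorted.txt

# IDs present on disk but absent from the catalog = orphaned images.
comm -13 catalog_sorted.txt disk_sorted.txt > orphaned_ids.txt
wc -l < orphaned_ids.txt    # prints: 2
```

With real inputs, the count would show how many of the 1700 on-disk images the catalog no longer tracks.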
I'm attempting to import the additional images from the Data Domain storage unit back into the NetBackup catalog, so that I can manually expire them again to see if that clears some space for us. However, whenever I attempt this, the import job fails with the following error.
Import phase 1 started 07/02/2014 12:04:59
12:05:00 INF - Create DB information for path @aaaa9.
12:05:01 INF - Initiation of bpdm process to phase 1 import path @aaaa9 was successful.
12:05:14 INF - socket read failed: errno = 10054 - An existing connection was forcibly closed by the remote host.
12:05:14 INF - Status = file read failed.
12:05:14 INF - Consult the Activity Monitor for more information about this job.
---------------
The activity log entry for this job says:
07/02/2014 12:04:59 - begin Import
07/02/2014 12:05:00 - requesting resource @aaaa9
07/02/2014 12:05:00 - granted resource MediaID=@aaaa9;DiskVolume=ESSEX-su1;DiskPool=DDE01_su1;Path=ESSEX-su1;StorageServer=DDE01.mydomain;MediaServer=SRVBCK02.mydomain
07/02/2014 12:05:02 - Info bpdm(pid=8500) started
07/02/2014 12:05:02 - started process bpdm (8500)
07/02/2014 12:05:14 - Error bpimport(pid=45424) socket read failed: errno = 10054 - An existing connection was forcibly closed by the remote host.
07/02/2014 12:05:14 - Error bpimport(pid=45424) Status = file read failed.
07/02/2014 12:05:14 - end Import; elapsed time: 00:00:15
file read failed(13)
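For reference, a phase-1 (create DB info) import along these lines is what produced the log above. The flag set is an assumption based on the disk media ID shown in the job log; verify it against the bpimport documentation for your NetBackup version before running. The dry-run guard only prints the command, so the sketch is safe to run anywhere.

```shell
# Sketch of a phase-1 import. MEDIA_ID is the disk media ID from the job
# log; the flag set is an assumption to check against your NetBackup
# version's bpimport documentation.
MEDIA_ID="@aaaa9"
DRYRUN="${DRYRUN:-1}"

CMD="bpimport -create_db_info -id $MEDIA_ID"

if [ "$DRYRUN" -eq 1 ]; then
    echo "$CMD"    # print only; set DRYRUN=0 to actually execute
else
    $CMD
fi
```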
Does anyone have any ideas about what might be causing this error?
Thanks
Jim
02-07-2014 05:20 AM
VR46,
Reclaiming space on the DD is a little different from reclaiming it on BasicDisk storage, and no import is required in your situation.
Once NetBackup marks the images as expired, the DD keeps track of that data and removes it during its own cleaning cycle.
Each Data Domain has a cleaning cycle, and space is reclaimed only when it runs.
DDs generally have a cleaning frequency of one week; you can check the schedule with the following command on the DD:
filesys clean show schedule
You can start a manual clean with:
filesys clean start
And check the status of the clean with:
filesys clean watch
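If holding "filesys clean watch" open is inconvenient, a simple poll loop over "filesys clean status" can do a similar job. This is only a sketch: DD_CMD (shown here as an ssh invocation to the appliance) and the strings matched in the status output are assumptions to adapt to your environment.

```shell
# Poll cleaning progress instead of holding "filesys clean watch" open.
# DD_CMD stands in for however you reach the DD CLI (the ssh target below
# is an assumption), and the matched strings are assumptions too.
DD_CMD="${DD_CMD:-ssh sysadmin@DDE01.mydomain}"

poll_clean() {
    while :; do
        status=$($DD_CMD filesys clean status)
        echo "$status"
        case "$status" in
            *finished*|*"not running"*) return 0 ;;
        esac
        sleep 60    # check once a minute
    done
}
```

Calling poll_clean then prints the status line each minute and returns once cleaning has stopped.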
----
Do remember: if you expire 10 GB of images in NetBackup, there is no guarantee you will get 10 GB of free space back on the DD, because of the deduplication the DD does.
You may get 9 GB back, or less than 1 GB.
02-07-2014 05:35 AM
Thanks for your reply Nagalla.
I should point out that numerous cleaning cycles have run between the images expiring in NetBackup and my checking the disk space on the DD. I would have thought that if the images are no longer in the catalog and a cleaning cycle has run, the files would no longer be listed as existing on the Data Domain appliance.
Either NetBackup is not expiring images properly, or I am missing something in my understanding of how this works.
Thanks
02-07-2014 05:44 AM
It's expected behaviour with the Data Domain.
1) Running cleaning too frequently on the DD is not recommended, per Data Domain support.
2) Data blocks on the DD may still be referenced by images with valid retentions, and referenced blocks will not be cleaned.
The DD will never clean a data block until the last pointer to that block has expired.
I understand that running out of space on the DD is a big pain; because of the deduplication it does, it will not reclaim space as you might expect.
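The last-pointer rule can be made concrete with a toy model. The block lists below are invented stand-ins for the DD's fingerprint index, just to show why expiring an image frees only the blocks that nothing else references:

```shell
# Toy model of dedup reclaim: a block is only reclaimable when its LAST
# reference goes away. These block lists are invented stand-ins for the
# DD's fingerprint index.
printf 'b1\nb2\nb3\nb4\n' > expired_image.blocks   # blocks of the expired image
printf 'b2\nb3\nb4\n'     > live_images.blocks     # blocks still referenced elsewhere

sort -u expired_image.blocks > e.sorted
sort -u live_images.blocks   > l.sorted

# Blocks unique to the expired image = the only space cleaning can free.
comm -23 e.sorted l.sorted    # prints: b1
```

In this toy case only one of the expired image's four blocks is freed, which is why expiring 10 GB of images can translate into far less than 10 GB reclaimed.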
02-10-2014 02:00 AM
Thanks Nagalla
You have enlightened me somewhat, and I think I now understand what is happening. And as you say, it is expected behaviour.
Even though Netbackup has expired those images from the catalog, DD still needs them if I am to restore from one of the subsequent images that rely on data contained in the expired images.
I've had a support case open with both EMC and Symantec over the past week, and neither of the engineers I've dealt with offered this as an explanation for why this occurs, which is a surprise, especially on the EMC side, as I'd expect them to have some idea of how their product functions.
Thanks again for your help.
Jim