10-26-2011 07:56 AM
When you delete a file from a database directory (Oracle, TimesTen) on a filesystem created with Veritas, the space is not released:
example:
[root@ixtrtc11scb ~]# ls -lrt /TimesTen/TransactionLogs/*
/TimesTen/TransactionLogs/lost+found:
total 0
[root@ixtrtc11scb ~]# df /TimesTen/TransactionLogs
/dev/vx/dsk/ttdg/TimesTen_TransactionLogs
68157440 48041974 18858384 72% /TimesTen/TransactionLogs
10-26-2011 03:54 PM
Is the platform Linux? If so, you can try flushing the file system buffer cache:
#>blockdev --flushbufs /dev/vx/dsk/ttdg/TimesTen_TransactionLogs
If it is Solaris, you can try unmounting and remounting the FS.
Let us know if that solves the issue.
Joe D
10-26-2011 10:23 PM
In addition, at what level are you looking into this? Are you looking at it from the array perspective?
I would suggest having a look at Thin Reclamation; see the link below:
https://www-secure.symantec.com/connect/articles/automating-thin-storage-reclamation-veritas-storage-foundation
Gaurav
10-28-2011 05:12 AM
Moved to the Storage Foundation forum since the problem is related to SF components, not SFHA management products.
Regarding space not being reclaimed - were you trying to delete log files to reclaim space? If the files that were removed were being held open/being written to by an active process, this will not clear the space until the process(es) are stopped/restarted.
To check if a file is in use by a process prior to deleting it, see fuser. If the file is in use, either stop the process before deleting the file, or, if you just need to clear its contents, cat /dev/null to the file to zero it in place.
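A minimal sketch of that check-then-clear sequence. It uses a temp file created with mktemp as a stand-in for a busy log (the real files in this thread live under /TimesTen/TransactionLogs, which this deliberately does not touch):

```shell
#!/bin/sh
# Stand-in for a log file; safe to run anywhere.
LOG=$(mktemp)
echo "some log data" > "$LOG"

# fuser exits non-zero when no process has the file open.
if fuser "$LOG" >/dev/null 2>&1; then
    # A process holds the file: zero it in place instead of deleting,
    # so the writer keeps a valid handle but the blocks are freed.
    cat /dev/null > "$LOG"
    echo "file was busy - truncated in place"
else
    # Nothing has it open: deleting will actually release the space.
    rm "$LOG"
    echo "file was not in use - removed"
fi
```

The point of the cat /dev/null trick is that the writing process keeps its file descriptor, so you avoid the deleted-but-still-open situation that leaves df reporting the old usage.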
10-28-2011 11:58 AM
I agree with Grace. I have seen something similar at a customer not so long ago.
A log partition on a NetBackup master server filled up. The customer deleted all the logs, but the space was not released. I asked him to stop NBU; my plan was to unmount and fsck the filesystem.
The moment NBU went down, the space was released without further intervention.
11-17-2011 01:01 PM
We're seeing it at my site as well.
RHEL 5.7 with SF 5.1SP1
We were getting ready to do an SAP system copy and blew out all of the Oracle filesystems (1TB+), and when we ran df -ah it was still showing the filesystems as having data in them. If you went into the filesystems, they would show nothing there. I figured VxVM had to do some cleanup in the background and let it sit overnight. Came back the next morning and it was still reporting the same. At this point I rebooted the servers, and when they came back up df was reporting the correct usage.
I verified that nothing was being left open with fuser/lsof and that the oracle database was shutdown. I'll try Joe D's suggestion with "blockdev --flushbufs" and see if that works.
11-19-2011 08:38 AM
You will see this when a process has a file in that filesystem held open. You may be able to unmount and remount it, but if that does not work (or if that would disrupt the application using that filesystem), a reboot will clear it up.