
Disk space reporting

I'm running VCS 5.0 and have a service group (SG) with NFS shares served off a RAID from the two servers in my cluster. The problem I'm running into is a conflict between how much disk space "df" reports as used compared to "du". One of the file systems is off by 36 GB. We've unshared the file system, taken down the cluster, and tried an fsck, but no errors were reported and the reported space did not change. Has anyone ever seen this before?

1 Solution

Accepted Solutions
Accepted Solution!


Hi,

refer: http://www.symantec.com/docs/TECH21312

du and df report space differently.
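One common cause of df reporting more used space than du can find is a process holding a deleted file open: du walks the directory tree and cannot see the unlinked file, while df asks the filesystem, which still counts the blocks until the last file descriptor closes. A minimal sketch of that effect (hypothetical temp directory, plain sh):

```shell
# Create a file, hold it open in the background, then unlink it.
tmpdir=$(mktemp -d)
dd if=/dev/zero of="$tmpdir/big" bs=1024 count=1024 2>/dev/null
tail -f "$tmpdir/big" >/dev/null &
holder=$!
rm "$tmpdir/big"
# du now finds almost nothing - the file is gone from the tree,
# but the filesystem still accounts for its blocks until $holder exits.
du -sk "$tmpdir"
kill "$holder"
rm -rf "$tmpdir"
```

Once the holding process exits (or the filesystem is unmounted), df and du agree again, which is why checking for open files is usually the first step.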


4 Replies


post the evidence.


This looks like an OS issue, not VCS.

If memory serves me right, df reserves about 10% in its space calculation, which is why df can report disk usage of more than 100%.

Found this:

http://docs.oracle.com/cd/E23824_01/html/821-1451/spmonitor-6.html 

Use the df command to show the amount of free disk space on each mounted disk. The usable disk space that is reported by df reflects only 90 percent of full capacity, as the reporting statistics allows for 10 percent above the total available space. This head room normally stays empty for better performance.

The percentage of disk space actually reported by the df command is used space divided by usable space.
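With the 10% head room described above, the df-style percentage (used divided by usable rather than total) can exceed 100%. A back-of-the-envelope sketch with made-up sizes:

```shell
# Illustrative numbers only - not from the poster's system.
total_kb=1000000                  # raw filesystem size
reserve_kb=$((total_kb / 10))     # ~10% head room kept back by the filesystem
usable_kb=$((total_kb - reserve_kb))
used_kb=950000                    # blocks actually in use
# df-style percentage: used / usable, so it can go past 100%
pct=$((100 * used_kb / usable_kb))
echo "${pct}%"                    # 105% - "more than full"
```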


Thank you for your suggestions everyone, but something else is going on. This has now occurred on three different clusters, and we've verified that there wasn't a process that still had anything open on the file system. In my example, the file system is only 49 GB and was reporting 36 GB more than what was actually being used (about 1 GB), so roughly 73% of the file system is being misreported. I don't know where the problem resides yet and am still looking for reasons.
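For reference, the percentage quoted above is just the misreported space over the filesystem size:

```shell
# 36 GB misreported out of a 49 GB filesystem (figures from the post above)
echo "$((36 * 100 / 49))%"   # 73% - integer percent of the filesystem misreported
```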

Equipment: two Solaris 10 boxes running VCS 5.0, with VxFS file systems that reside on a Sun StorEdge 3510 RAID and are part of an SG for NFS shares

Troubleshooting:

  • Ran fuser on the file system to verify no processes still had anything open
  • Unmounted the share from all systems
  • Completed failovers of the NFS share SG
  • Unshared and unmounted the file system on the cluster
  • Brought the whole cluster down, then brought down both servers and re-initialized the cluster
  • Ran an fsck on the file system; no errors reported
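A quick way to quantify the gap while running checks like these is to compare the two numbers directly. A sketch, where "/tmp" is a stand-in for the affected VxFS mount point (and on Solaris the fsck above would be run as `fsck -F vxfs` against the raw volume):

```shell
# Measure the df-vs-du discrepancy on one mount point.
# "/tmp" is a hypothetical stand-in; point fs at the affected filesystem.
fs=/tmp
# -P gives POSIX single-line output; field 3 is KB used per the filesystem.
df_used=$(df -Pk "$fs" | awk 'NR==2 {print $3}')
# -x keeps du on this one filesystem; this is what the tree adds up to.
du_used=$(du -skx "$fs" 2>/dev/null | awk '{print $1}')
echo "df: ${df_used} KB used, du: ${du_used} KB found"
echo "unaccounted: $((df_used - du_used)) KB"
```

In the scenario described above, the "unaccounted" figure would be the roughly 36 GB gap; if it drops to near zero after an unmount/remount, the space was tied up in open or orphaned files rather than genuinely allocated.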