Forum Discussion

menhoy
14 years ago

Stuck on calculating the total files of a big VxFS filesystem, any good idea?

There is a VxFS filesystem with about 140,000,000 files.

It runs on SUSE Linux 10 SP3.

 

First I ran "df -i", and it reported about 140,000,000 inodes used.

Then I deleted about 40,000,000 files and ran "df -i" again, but it reported about 190,000,000 inodes used.

After five days, "df -i" still reported about 140,000,000 inodes used, and the number never went down any further.

 

There are two problems:

1. The inodes-used count does not equal the total number of files. Is there any way to get the exact file count quickly?

2,The "df -i" has a problem just mentioned above.Is it a problem of VxFS or Suse linux 10 sp3?

 

Thanks for your patience in reading the above!

Thanks a lot to anyone who gives me an answer. You are welcome in China, and you will have earned a good Chinese meal.

  • VxFS allocates inodes dynamically. Thus, the question df -i asks ("How many inodes are free?") is not really applicable*. Are you trying to find the number of free inodes, or are you trying to find the number of files which exist, as your comments suggest? More generally, what are you trying to accomplish?

    *Note: if you don't have the largefiles option set, either (a) when running mkfs or (b) via fsadm after mkfs, then you are limited to, I believe, ~8 million inodes. largefiles has been the default for a while now, so this shouldn't be an issue for most systems. It is clearly not a concern for this filesystem, as you already have more than 8 million files.
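
    If you want to double-check whether largefiles is set on an existing filesystem, fsadm can report and set it. The invocations below are a sketch from memory (Linux VxFS generally takes -t, while Solaris and HP-UX take -F), and /mount_point is a placeholder for your VxFS mount point, so verify the syntax against your fsadm man page:

    $ fsadm -t vxfs /mount_point                  (reports "largefiles" or "nolargefiles")

    $ fsadm -t vxfs -o largefiles /mount_point    (turns the flag on for a mounted filesystem)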

  • Menhoy,

    Can you put together a shell command (like 'ls -alR') to do this? Try it first on a VxFS file system with a few thousand files, to see how long it takes.

    When I tried this, I noticed that I would need to exclude the lines that (1) started with "total", (2) ended with 'space-dot', (3) ended with 'space-dot-dot', (4) included the slash character, and (5) were blank.

    $ cd /opt/app     (directory just above your mount point)

    $ ls -alR oracle | grep -v "^total" | grep -v " \.$" | grep -v " \.\.$" | grep -v "/" | grep -v "^$" > /var/tmp/zzfilecount

    (Note the backslashes before the dots: without them, grep treats '.' as match-any-character, so " .$" and " ..$" would also discard every one- and two-character filename.)

    oracle is the name of the directory at the top level of my filesystem (mount point).  Run this as a test on a file system with a few thousand files, then look through the output file, to see if the command accurately removed all the lines which you don't want to include in the count.  Then run it again, but use the "wc -l" command to count the lines generated.

    $ ls -alR oracle | grep -v "^total" | grep -v " \.$" | grep -v " \.\.$" | grep -v "/" | grep -v "^$" | wc -l

    Try it on a test box.  Start with file systems having smaller numbers of files, and work up to file systems with larger numbers of files, to see how long it takes, and to see if the command becomes a resource hog.
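
    A possibly simpler alternative (my suggestion, untested on a tree of this size): let find select just the entries you want, so there is no grep filtering to get wrong. It still has to walk the whole tree, so on 140,000,000 files expect it to run for a long time:

    $ find /opt/app/oracle -type f | wc -l        (counts regular files only)

    $ find /opt/app/oracle ! -type d | wc -l      (counts everything except directories)

    One caveat: wc -l counts lines, so a filename containing an embedded newline would inflate the count slightly.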