Hi,
In addition to what was explained above, I'll give you an example using a mount resource. Please remember that the cluster is programmed to perform certain tasks: to online, monitor, and offline resources.
To online a resource (a mount in this case), the cluster has been "taught" (scripted in the agent) to online a mount using the mount command plus the attributes we place in the configuration (main.cf). The cluster uses this knowledge and information to online the mount resource: "mount -t vxfs /dev/vx/dsk/oradg/oravo/ /oradata"
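Stripped down, the agent's online entry point is little more than that command assembled from the configured attributes. A minimal sketch (the attribute names mirror the VCS Mount agent; the volume path here is illustrative, and the command is printed rather than executed since a real mount needs root and a real VxVM volume):

```shell
#!/bin/sh
# Sketch of a Mount agent "online" entry point: build the mount
# command from the attributes configured in main.cf.
BlockDevice="/dev/vx/dsk/oradg/oravol"   # illustrative volume path
MountPoint="/oradata"
FSType="vxfs"

# Printed instead of run, so the sketch works without root:
printf 'mount -t %s %s %s\n' "$FSType" "$BlockDevice" "$MountPoint"
```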
Great, the resource is online now.
Say some other admin goes and unmounts the mount from the CLI, not through VCS. The cluster's other task is to monitor the mount resource. If it goes down, the cluster should (depending on restart limits and tolerance levels) either try to mount it again, or fail over to the other node (taking the remaining resources offline in a controlled manner).
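The monitor entry point is just as simple in spirit: check whether the mount point is actually mounted and report the state. A rough sketch on Linux, reading /proc/mounts (real VCS monitor scripts signal the state through exit codes rather than printing it; this simplified version just prints ONLINE or OFFLINE):

```shell
#!/bin/sh
# Sketch of a "monitor" entry point for a mount resource.
monitor_mount() {
    # /proc/mounts lists one mount per line: device, mount point, fstype...
    # Field 2 is the mount point, so look for an exact match.
    if awk -v mp="$1" '$2 == mp { found = 1 } END { exit !found }' /proc/mounts
    then
        echo ONLINE
    else
        echo OFFLINE
    fi
}

monitor_mount /    # the root FS is always mounted, so this prints ONLINE
```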
So now we'll get to your questions about clearing faults.
Suppose the admin who unmounted the FS also goes and deletes the folder used by the mount. When the cluster sees the resource down, it may try to mount the FS again (once again depending on restart limits and tolerance levels), but it will fail because there is no mount point (folder) to mount the FS on.
Your resource is now faulted. Do you think that clearing the fault automatically would resolve the problem here? No, of course not, because you know there is no way the mount will succeed without the folder being recreated. And that cannot happen until an admin (you) goes and investigates and resolves the issue.
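The recovery order matters: first fix the root cause yourself, then tell VCS to clear the fault and retry. A sketch of the sequence, with invented resource and system names ("oradata_mnt", "node1") — the hares commands are printed rather than executed here, since the VCS CLI only exists on a cluster node:

```shell
#!/bin/sh
# Hypothetical fix-then-clear sequence for the deleted-folder example.
clear_and_retry() {
    res="$1"; sys="$2"
    # On a real VCS node you would run these directly (as root):
    printf 'hares -clear %s -sys %s\n'  "$res" "$sys"   # clear FAULTED state
    printf 'hares -online %s -sys %s\n' "$res" "$sys"   # retry the online
}

# Step 1: the admin recreates the mount point first (mkdir -p /oradata).
# Step 2: only then clear the fault -- otherwise the next online attempt
# just fails the same way:
clear_and_retry oradata_mnt node1
```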
The cluster works within the boundaries of what the agent is programmed to do (online, clean, monitor, and offline). It cannot be smart and go and troubleshoot issues like this.
Now, to get to the clean action. Suppose that in the example above there is another mount configured in your cluster, one that was not unmounted by the admin. During the failover the cluster has to unmount that resource, but some user is logged in and currently working in the folder, so the normal unmount will not work. A more drastic approach is needed: the clean action is called. The clean in this case would perform a force unmount (and maybe an fuser too) to kick that user out and unmount the mount point so the service group can fail over. The clean action is basically "calling in the big guns" to get the job done and to get the failover moving.
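A sketch of what that big-guns clean might boil down to for a mount resource — fuser -km to kill the processes using the mount point, then a forced umount. Commands are printed rather than executed (both need root and a real busy mount), so treat this as an illustration of the idea, not the actual Mount agent's clean script:

```shell
#!/bin/sh
# Sketch of a "clean" entry point for a mount resource.
clean_mount() {
    mp="$1"
    printf 'fuser -km %s\n' "$mp"   # kick out anyone working in the folder
    printf 'umount -f %s\n' "$mp"   # force the unmount so failover can proceed
}

clean_mount /oradata
```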
Hope that makes sense :)