Thank you for the reply. I checked the writer statuses yesterday when I was having all of the trouble. They all showed Stable, though one indicated it was 'waiting' on something. When I ran the command again today, everything is Stable with no errors.
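For anyone following along, a quick way to spot a writer that isn't Stable is to scan the output of `vssadmin list writers` rather than eyeballing it. A minimal sketch (the sample output below is made up for illustration, not from my server):

```python
import re

def unstable_writers(vssadmin_output):
    """Return (writer name, state) pairs for any writer not reporting Stable."""
    results = []
    # Each writer block contains lines like:
    #   Writer name: 'System Writer'
    #      State: [1] Stable
    for name, state in re.findall(
        r"Writer name: '([^']+)'.*?State: \[\d+\] (\w+)", vssadmin_output, re.S
    ):
        if state != "Stable":
            results.append((name, state))
    return results

# Hypothetical sample of `vssadmin list writers` output:
sample = """
Writer name: 'System Writer'
   State: [1] Stable
   Last error: No error
Writer name: 'NTDS'
   State: [5] Waiting for completion
   Last error: No error
"""
print(unstable_writers(sample))  # [('NTDS', 'Waiting')]
```

Nothing fancy, but it makes a 'waiting' writer jump out instead of scrolling past it.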
My regularly scheduled job failed again last night, but I have run several test jobs this morning successfully. The test incremental jobs appeared to pick up a lot of data (roughly one-third of the actual volume), which was rather unexpected, considering we generally only get about 2% of all data on an incremental.
After digging a little deeper into the DFS portion, I found an error on the other DFS member (the one not being backed up by Backup Exec): its connection to the host partner was lost before the hotfixes were installed on the file server. I don't profess to be an expert on DFS, but it appears the two were out of sync. The hosting server was re-replicating the volume, and I believe this was causing trouble with the VSS writer during Backup Exec backups. Replication takes place over a 100Mbit link, and total data on the volume is around 500GB, so the reseeding could have been going on all day and night while I was having the trouble. It looks like a really crazy coincidence that DFS flaked out right before a RAWS update.
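As a sanity check on that theory, the raw wire time for a full reseed is easy to estimate. This ignores DFS-R compression, staging, and protocol overhead, so treat it as a ballpark upper bound rather than a measured figure:

```python
# Rough estimate: reseeding 500 GB over a 100 Mbit/s link at full line rate.
volume_gb = 500
link_mbps = 100

total_megabits = volume_gb * 1000 * 8      # 1 GB ~ 1000 MB ~ 8000 Mbit
seconds = total_megabits / link_mbps       # 40,000 s
hours = seconds / 3600
print(f"{hours:.1f} hours")                # -> 11.1 hours
```

About 11 hours at best-case line rate, which lines up with the replication churning "all day and night" through my backup window.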
To help the cause, I have disabled Volume Shadow Copy on the file server for that volume... which should eliminate the VolSnap error. It was set up long ago, before we were ever using DFS or Backup-to-Disk in Symantec. It's just an extra thing that could go wrong, and it really isn't providing enough benefit to keep it on.
I have a Full job scheduled for tonight, and I'm feeling a little better now that I've run a few test jobs successfully. At this point, I'm going to let it run tonight and see where I'm at in the morning.
Thanks again for your suggestions.