Forum Discussion

vlaho
Level 2
10 years ago

Data storage(PureDisk) usage 0% after RAID battery replaced in 5220

The RAID battery was replaced by NCR techs, recalibration has finished, and the battery looks good:

from:

/opt/MegaRAID/MegaCli/MegaCli64 -adpbbucmd -getbbustatus -a0

>>

BBU status for Adapter: 0

BatteryType: iBBU
Voltage: 4058 mV
Current: 0 mA
Temperature: 40 C

BBU Firmware Status:

  Charging Status              : None
  Voltage                      : OK
  Temperature                  : OK
  Learn Cycle Requested        : No
  Learn Cycle Active           : No
  Learn Cycle Status           : OK
  Learn Cycle Timeout          : No
  I2c Errors Detected          : No
  Battery Pack Missing         : No
  Battery Replacement required : No
  Remaining Capacity Low       : No
  Periodic Learn Required      : No
  Transparent Learn            : No

Battery state:

GasGuageStatus:
  Fully Discharged        : No
  Fully Charged           : Yes
  Discharging             : Yes
  Initialized             : Yes
  Remaining Time Alarm    : No
  Remaining Capacity Alarm: No
  Discharge Terminated    : No
  Over Temperature        : No
  Charging Terminated     : No
  Over Charged            : No

Relative State of Charge: 97 %
Charger System State: 49168
Charger System Ctrl: 0
Charging current: 0 mA
Absolute state of charge: 99 %
Max Error: 2 %

Exit Code: 0x00

<<<
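As an aside, the alarm fields in that -getbbustatus output can be filtered mechanically. A minimal sketch, assuming the field names shown in the paste above (`bbu_alarms` is just an illustrative helper name, not a MegaCli feature):

```shell
# Flag only the BBU status fields that indicate trouble. Field names
# are assumed from the MegaCli64 output pasted above; 'bbu_alarms' is
# a hypothetical helper, not part of MegaCli itself.
bbu_alarms() {
    grep -E 'Battery Replacement required *: *Yes|Remaining Capacity Low *: *Yes|Battery Pack Missing *: *Yes|I2c Errors Detected *: *Yes'
}

# On the appliance:
#   /opt/MegaRAID/MegaCli/MegaCli64 -adpbbucmd -getbbustatus -a0 | bbu_alarms
# No output means none of those alarm fields are set.
```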


 # crcontrol --dsstat

************ Data Store statistics ************
Data storage      Raw    Size   Used   Avail  Use%
                 110.9T 106.5T   0.0M 106.5T   0%

Number of containers             : 443160
Average container size           : 176032214 bytes (167.88MB)
Space allocated for containers   : 78010436372950 bytes (70.95TB)
Reserved space                   : 4879383494656 bytes (4.44TB)
Reserved space percentage        : 4.0%
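For what it's worth, the Use% figure in that table can be pulled out mechanically, which is handy for spotting an implausible 0% from a cron job. A minimal sketch, assuming the two-line header/value layout shown above (`dsstat_use_pct` is an illustrative name):

```shell
# Extract the numeric Use% value from `crcontrol --dsstat` output.
# The two-line "Data storage" header/value layout is assumed from the
# paste above; 'dsstat_use_pct' is a hypothetical helper name.
dsstat_use_pct() {
    awk '/Data storage/ {getline; gsub(/%/, "", $NF); print $NF; exit}'
}

# Example against the output shown above:
pct=$(printf '%s\n' \
    'Data storage      Raw    Size   Used   Avail  Use%' \
    '                 110.9T 106.5T   0.0M 106.5T   0%' | dsstat_use_pct)
echo "$pct"    # prints 0
```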

Backups, replications are running just fine, queue info is good.

Would a reboot of the appliance help?

... Thanks


  • I think you didn't get a quick response because there wasn't much to go on - i.e. no appliance logs (e.g. storaged, spoold), no output from any of the crcontrol query commands, no other description of symptoms, no nbdevquery of storage server and pool and volume, version not specified.  You found the MegaCli64 command to check BBU, but did you also use MegaCli64 to check parity group, disks, adapters - and what about checking VxVM too?  Do come back with more info, and someone might try to help more quickly next time.
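To make that checklist concrete, a sketch of the commands involved (the MegaCli64 path comes from the original post; 'nbuapp' is the disk group named later in this thread; verify the flags on your appliance before relying on them):

```shell
# Health checks below the file system, per the suggestions above.
# The MegaCli64 path is the one used earlier in this thread; 'nbuapp'
# is the disk group that appears later. Run as root on the appliance.
MEGACLI=/opt/MegaRAID/MegaCli/MegaCli64

"$MEGACLI" -LDInfo -Lall -aALL     # virtual drive / parity group state
"$MEGACLI" -PDList -aALL           # per-disk state and error counters
"$MEGACLI" -AdpAllInfo -aALL       # adapter summary

# VxVM layer beneath the MSDP file system
vxdisk -o alldgs list              # disk and disk-group visibility
vxprint -g nbuapp -ht              # volume/plex states
```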

  • Doesn't look good.  It shouldn't need a reboot, as you've just cold booted it following a BBU replacement.  If it's still showing as zero, might be best to place a call with support.

    What's the current status?

    What version of appliance?  You might be on an older version which used to take some time performing checks before the 'disk pool' would come online and 'up', and then be ready for backups/restores.

  • still the same:

    ************ Data Store statistics ************
    Data storage      Raw    Size   Used   Avail  Use%
                     110.9T 106.5T   0.0M 106.5T   0%

    Number of containers             : 441103
    Average container size           : 174696724 bytes (166.60MB)
    Space allocated for containers   : 77059249256732 bytes (70.08TB)
    Reserved space                   : 4879383494656 bytes (4.44TB)
    Reserved space percentage        : 4.0%

    I was hoping the forum might have a quicker response. I'll open a call with support and see what they have to say.

    thx



  • It was a corrupted VxFS file system; I unmounted it and ran a full fsck.

    I rebooted the appliance afterwards and we are back to normal usage numbers.

  • Glad you got it fixed.

    Are you able/willing to show the commands and output used to demonstrate/detect that it was a corrupted VxFS file system?

    Can I ask, how did you go about validating that the higher layer constructs of application folder structures and files and data (i.e. MSDP) had themselves not been corrupted by the VxFS corruption?

    Thanks.

  • 1. Manage > Storage > scan

    Storage> scan
    - [Info] The scan operation can take up to 15 minutes to complete.
    - [Info] Refreshing the storage devices...
    - [Info] Succeeded.

    2. monitor

    Storage> monitor
    - [Info] Performing sanity check on disks and partitions... (5 mins approx)
    - [Warning] The 'MSDP' partition '0' is not mounted...
    - [Info] Mounting the 'MSDP' partition '0'...
    - [Error] Failed to mount the 'MSDP' partition '0'. A full file system check (fsck) needs to be performed on this partition.

    3. Comment out the 'pdvol' entry in fstab, reboot, make sure 'pdvol' is not mounted, then run:

    fsck -t vxfs /dev/vx/dsk/nbuapp/pdvol -y

    fsck from util-linux-ng 2.16
    pass0 - checking structural files
    pass1 - checking inode sanity and blocks
    pass2 - checking directory linkage
    pass3 - checking reference counts
    pass4 - checking resource maps
    au 184321 summary incorrect - fix? y

    ...

    4. Rebooted the appliance (pdvol mounted once the errors were corrected). I didn't have to, but I wanted to start clean.

    ************ Data Store statistics ************
    Data storage      Raw    Size   Used   Avail  Use%
                     110.9T 106.5T  70.3T  36.2T  66%

    Number of containers             : 437510
    Average container size           : 175936248 bytes (167.79MB)
    Space allocated for containers   : 76973868126586 bytes (70.01TB)
    Reserved space                   : 4879383494656 bytes (4.44TB)
    Reserved space percentage        : 4.0%

    <all good>
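For anyone landing here later, the whole sequence above condenses to something like this. Paths come from the thread, the function names are illustrative, and the fstab edit is worth rehearsing on a copy of the file first:

```shell
#!/bin/sh
# Condensed sketch of steps 1-4 above. Device and fstab paths come
# from the thread; function names are illustrative. Run only on the
# appliance, ideally with support on the line.
DEV=/dev/vx/dsk/nbuapp/pdvol

comment_pdvol()   { sed -i 's|^[^#].*pdvol|#&|' "$1"; }     # disable mount
uncomment_pdvol() { sed -i 's|^#\(.*pdvol\)|\1|' "$1"; }    # re-enable mount

recover_pdvol() {
    comment_pdvol /etc/fstab          # keep pdvol from mounting at boot
    echo "reboot, confirm pdvol is unmounted, then fsck:"
    fsck -t vxfs -y "$DEV"            # full structural check
    uncomment_pdvol /etc/fstab        # restore the fstab entry
    echo "reboot and re-check: crcontrol --dsstat"
}
# recover_pdvol                       # uncomment to run on the appliance
```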