AUTOMATE MYSQL DUMP BACKUP

Anurag_Sharma1
Level 3

Hi All,

Here's my situation.

MASTER/MEDIA : Solaris 10

Client : Linux/Solaris


There is a backup script created by the MySQL admin, let's call it backup.sh, which can do an on-demand MySQL dump into the directory /mysql_backup/yyyymmdd. That means every time you run the script, it creates a folder named with that date.

 

I have been told to back up these dump files as a standard backup.

I need the following accomplished:

1. Call backup.sh to perform the MySQL dump and create the dump in /mysql_backup/yyyymmdd.
2. Run a NetBackup backup to back up the dump.
3. Once the NetBackup backup is successful, delete the dump from disk.


What would be the best way to do this via NetBackup? Do I have to use bpstart_notify and bpend_notify scripts on each client?


Please guide.

Anurag Sharma


7 Replies

Will_Restore
Level 6

Here's the standard means to back up a MySQL database:

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html

You can run mysqldump > /mysql_backup/ from bpstart_notify.

No need for bpend_notify that I can see.
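The dump step that such a pre-backup script would run could be sketched as a small shell function. This is a sketch only: the `dump_all` helper is hypothetical, and `MYSQLDUMP` is made overridable purely so the flow can be exercised without a MySQL server installed.

```shell
#!/bin/sh
# Sketch of the dump step a bpstart_notify-style script would perform.
# MYSQLDUMP is overridable so the flow can be tried without MySQL installed.
MYSQLDUMP="${MYSQLDUMP:-/usr/bin/mysqldump}"

# dump_all OUTDIR DB...: gzip-dump each named database into OUTDIR/<db>.sql.gz
dump_all() {
    outdir="$1"; shift
    mkdir -p "$outdir"
    for db in "$@"; do
        "$MYSQLDUMP" --opt --databases "$db" | gzip > "$outdir/$db.sql.gz"
    done
}

# e.g. dump_all "/mysql_backup/$(date +%Y%m%d)" mydb otherdb
```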

Marianne
Level 6
Partner    VIP    Accredited Certified

If the time to perform the SQL dump cannot be predicted, bpstart_notify may not be the best way, as the bpstart_notify timeout will cause a backup failure if the dump does not complete in time.

Rather, use cron on the client to schedule the script.
Add a bpbackup command to the end of the script to send the dump file to NBU:
(/usr/openv/netbackup/bin/bpbackup -p <policy-name> -s <user-sched-name> /path_to_dumpfile)

Create a policy for this client with a User Backup schedule type and an open window during which the backup will be allowed.

Alternatively, use the NBU agent that can be obtained from Zmanda. 
See Backing up the Dolphins! 
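Concretely, the end of the DBA's backup.sh could hand the dump directory to NetBackup along these lines. A sketch only: the policy and schedule names are placeholders (not from this thread), and the bpbackup path is overridable so the flow can be tested on a machine without NetBackup installed.

```shell
#!/bin/sh
# Sketch: call bpbackup at the end of the dump script as a User Backup.
# MYSQL_POLICY / mysql_user_sched are placeholder names, not from this thread.
BPBACKUP="${BPBACKUP:-/usr/openv/netbackup/bin/bpbackup}"

# send_to_nbu DIR: back up DIR via NetBackup; -w waits for completion so
# the exit status tells us whether the backup actually succeeded.
send_to_nbu() {
    dir="$1"
    if "$BPBACKUP" -w -p MYSQL_POLICY -s mysql_user_sched "$dir"; then
        echo "user backup of $dir succeeded"
    else
        echo "user backup of $dir failed" >&2
        return 1
    fi
}

# A cron entry on the client would drive the whole script, e.g.:
# 0 1 * * * /usr/local/bin/backup.sh
```

Checking the exit status of `bpbackup -w` is what makes step 3 of the original request (delete the dump only after a successful backup) possible.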

Anurag_Sharma1
Level 3

Marianne,

They don't want to buy Zmanda.

The solution you gave is excellent; however, it does not address the challenge of deleting the older backups.

Shall I add rm -r at the beginning of the backup script? Do you think that would be the best approach?

 

Anurag

Marianne
Level 6
Partner    VIP    Accredited Certified

Disk space management is not the task of the backup admin if the DBA insists on doing dumps as opposed to online, hot backups using an agent.
This is something the server owners need to decide. They usually want to keep the backups on disk for fast, local restores.

If they have enough space for 2-3 days' backups, their dump script can look for older disk files at the start and remove them.

All of our customers using disk dumps understand that space management is between the server owner and the DBA.
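The pruning step suggested above could be sketched like this. Assumptions: the 3-day retention window is an example, the yyyymmdd directory layout matches backup.sh, and the GNU find options shown (`-maxdepth` in particular) may need adjusting for Solaris 10's native find.

```shell
#!/bin/sh
# Sketch: prune dated dump directories older than a retention window
# at the start of the dump script. The 3-day window is an assumption.

# cleanup_old_dumps ROOT DAYS: remove ROOT/yyyymmdd dirs older than DAYS days
cleanup_old_dumps() {
    root="$1"
    days="$2"
    find "$root" -maxdepth 1 -type d \
        -name '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]' \
        -mtime +"$days" -exec rm -rf {} +
}

# e.g. at the top of backup.sh: cleanup_old_dumps /mysql_backup 3
```

The dated-directory name pattern is safer than a bare `rm -r /mysql_backup/*`, since it only ever matches yyyymmdd dump folders.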

Anurag_Sharma1
Level 3

Marianne,

 

I got the backups working, thank you for your idea. However, when I run the script the User Backup runs 4 to 5 times instead of just once. What could be the reason for that?

 

Here is the script I am using:

 

----------------------------------------------------------------------------------------------------------------------------------------

 

#!/bin/bash

# The below command will delete the older backup
rm -r /mysql_backup/201*

TIMESTAMP=$(date +"%Y%m%d")
BACKUP_DIR="/mysql_backup/$TIMESTAMP"
MYSQL_USER="mysqlbak"
MYSQL_PASSWORD="Cricket1"
MYSQL=/usr/bin/mysql
# MYSQLDUMP=/usr/bin/mysqldump
MYSQLDUMP=/usr/mysql/5.1/bin/mysqldump
mkdir -p $BACKUP_DIR

databases=`$MYSQL -u$MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | egrep -iv "(Database|information_schema)"`

for db in $databases; do
# echo $db
mkdir -p $BACKUP_DIR/$db
BACKUP_DIRR="/mysql_backup/$TIMESTAMP/$db"
$MYSQLDUMP --force --opt --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db | gzip > "$BACKUP_DIRR/$db.sql.gz"

# The below command will start NetBackup Tape backups
/usr/openv/netbackup/bin/./bpbackup -w -p SOPMYSQL02 -s sopmysql02-backup /mysql_backup/

done

------------------------------------------------------------------------------------------------------------------------

Marianne
Level 6
Partner    VIP    Accredited Certified
The bpbackup is inside the 'for' loop, so it runs after the dump of each database. Put bpbackup outside the loop, after 'done'.
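The corrected structure can be sketched with stand-in commands. The `dump_cmd`/`backup_cmd` parameters are hypothetical, injected purely so the shape can be tested without mysqldump or bpbackup installed; the point is that the backup call sits after `done`.

```shell
#!/bin/sh
# Sketch of the corrected flow: one dump per database inside the loop,
# one NetBackup call after the loop. dump_cmd/backup_cmd are stand-ins.

# run_backup DUMP_CMD BACKUP_CMD DB...: dump each DB, then back up once
run_backup() {
    dump_cmd="$1"
    backup_cmd="$2"
    shift 2
    for db in "$@"; do
        "$dump_cmd" "$db"        # mysqldump ... | gzip ... in the real script
    done
    # bpbackup belongs here, outside the loop, so it runs exactly once
    "$backup_cmd"
}
```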

Anurag_Sharma1
Level 3

Awesome, Marianne!

Works of genius; I should have checked it.

Let me check all of them now.

Will be marking the solution soon. I may need your help further.

 

Anurag