Activity

  • John Walker posted an update in the group Updates 1 month, 1 week ago

    2019 July 11

    It looks like "#mysql50#.rocksdb" is something inherited from
    MySQL which flew under the radar and corrupts MariaDB.  See:
        https://bugzilla.redhat.com/show_bug.cgi?id=1530511
    There has been no update to any file in the:
        /server/var/mysql/.rocksdb
    directory since 2019-05-11 and other long-suffering
    administrators report that simply getting rid of this directory
    cures the problems running mysqldump with --all-databases.  I
    shall experiment with this after getting some sleep.
    
    Mother of babbling God: RocksDB was created by Facebook!
        https://en.wikipedia.org/wiki/RocksDB
    What drooling moron allowed it to creep into an open source
    installation?  I shall expeditiously extirpate it.
    
    Never take your eyes off the clown car.  I was poking around in
    the AWS S3 status page and discovered we had a metric buttload
    of storage billed from the us-east-1 region.  What's that all
    about?  Well, it turns out that when you set up UpdraftPlus and
    configure S3 storage of backups, it creates its storage bucket
    there, not in the region where the host it's backing up
    resides.  Now, I can understand that it makes sense for
    geographic diversification to keep your backups somewhere other
    than the host you're backing up, but it would be nice if they
    gave you a choice instead of just plopping them right down the
    street from the NSA.  I also don't know if sending the data
    between regions runs up data transfer charges which we might
    avoid by keeping the bucket in the same region and relying upon
    S3's built-in replication for data integrity.
    
    Then I looked inside the bucket and...surprise, surprise,
    surprise!  UpdraftPlus is configured to keep the four most
    recent backups and that's what it shows you when you look at the
    backup status page.  However, in addition to these backups, the
    ratburger.org.backups bucket contains 55 older backups dating
    from 2018-04-27 through 2019-03-31 and adding up to more than 10
    gigabytes of storage for which we've been paying all this time.
    There is no pattern I can discern in the dates of the old
    backups it failed to delete: they appear to be random.  I
    used the S3 management console to delete all of these ancient
    backups.
    
    Created a /server/var/backups/database directory for Ratburger
    MySQL database backups.
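    
    Presumably created with something along these lines (the post
    records only the resulting directory, not the command):
        mkdir -p /server/var/backups/database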
    
    Created a new CRON job, /server/cron/backupRatburgerDB
    to perform the periodic Ratburger database backup.  This
    is a shell script which uses mysqldump to dump the
    database and xz to compress the dump.  The backup file
    is named from the MySQL database name and the date and
    time of the backup, for example:
        /server/var/backups/database/Ratburger-db_2019-07-11T12:01.sql.xz
    The job obtains the password for the MySQL database from
    /server/cron/my.cnf to avoid passing it on the command line.
    The job automatically purges backups more than 5 days old
    from the backup directory.
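    
    A minimal sketch of what such a script might look like (the actual
    contents aren't shown in the post; the database name and the xz
    invocation are assumptions):
        #!/bin/sh
        DB=Ratburger-db                        # assumed database name
        DEST=/server/var/backups/database
        STAMP=$(date +%Y-%m-%dT%H:%M)
        #   Dump the database, reading credentials from the my.cnf
        #   file rather than passing the password on the command line
        mysqldump --defaults-extra-file=/server/cron/my.cnf "$DB" |
            xz > "$DEST/${DB}_${STAMP}.sql.xz"
        #   Purge backups more than 5 days old
        find "$DEST" -name '*.sql.xz' -mtime +5 -delete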
    
    Installed rclone in:
        /server/bin/rclone
    where the latest binary distribution resides in:
        rclone-v1.48.0-linux-amd64
    to which /server/bin/rclone/current is a symbolic link.
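    
    Hypothetical install steps (the post records only the resulting
    layout; the download URL follows rclone's standard distribution
    naming):
        cd /server/bin/rclone
        curl -LO https://downloads.rclone.org/v1.48.0/rclone-v1.48.0-linux-amd64.zip
        unzip rclone-v1.48.0-linux-amd64.zip
        ln -s rclone-v1.48.0-linux-amd64 current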
    
    Installed my ~/.config/rclone/rclone.conf file with credentials
    to cloud storage services.
    
    Made a symbolic link:
        ln -s /server/bin/rclone/current/rclone ~/bin/rclone
    
    Created a new S3 bucket, ratburger.org.snapshots, in the
    eu-west-2 (London) region.
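    
    The post doesn't say how the bucket was created; one way, assuming
    the AWS command line tools are installed, would be:
        aws s3 mb s3://ratburger.org.snapshots --region eu-west-2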
    
    Added logic to /server/cron/backupRatburgerDB to rclone sync the
    /server/var/backups/database directory to
    s3:ratburger.org.snapshots/database.  Note that the sync
    operation deletes files in the destination not present in the
    source so files purged locally as too old will be automatically
    removed from the remote archive.
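    
    The added command would be something along these lines (an
    assumption; the script itself isn't shown):
        rclone sync /server/var/backups/database s3:ratburger.org.snapshots/database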
    
    Added a CRON job to run the database backup twice a day:
        30 5,17 * * * /server/cron/backupRatburgerDB
    
    Created a new directory to mirror the uploads directory
    on the server:
        rclone mkdir s3:ratburger.org.snapshots/uploads
    
    Performed an initial sync of the uploads directory to S3:
        rclone sync -v /server/pub/www.ratburger.org/web/wp-content/uploads s3:ratburger.org.snapshots/uploads
    This copied 3.76 GB of files to the bucket.
    
    Added commands to /server/cron/backupRatburgerDB to rclone sync
    the uploads directory to the remote copy.
    
    The first scheduled run of the CRON job at 17:30 was a complete
    fiasco.  Due to failures to establish the proper environment
    when run from crontab, the mysqldump fell on its face and the
    rclone of the uploads directory failed to run.  I fixed the
    problems in the job and set it to re-run at 18:15.  This time it
    ran successfully.  I changed crontab back to the originally
    scheduled times and will wait to see what happens to-morrow
    morning.
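    
    The exact changes aren't described; the usual culprit is cron's
    minimal environment, so the fix is typically something like the
    following at the top of the script (paths assumed):
        PATH=/usr/local/bin:/usr/bin:/bin:/server/bin/rclone/current
        export PATH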