Activity

  • John Walker posted an update in the group Updates 1 month, 1 week ago

    2019 July 10

    Committed the Really Simple SSL version 3.2.3 update (Build
    329).
    
    Verified that the scheduled update to the Podcast RSS feed ran
    successfully.  The WP RSS Aggregator plug-in version 4.14 update
    appears to be all right, so I committed it (Build 330).
    
    Committed the WP External Links plug-in version 2.32 update
    (Build 331).
    
    Routine mail such as the backup completion message from the
    clown car is showing up properly, so I committed the WP Mail
    SMTP plug-in version 1.5.0 update (Build 332).
    
    The AWS CloudWatch dashboard for "Ratburger-CPU-Monitor":
        https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#dashboards:name=Ratburger-CPU-Monitor
    has been broken ever since we migrated from the t2.medium to
    t3.medium instance on 2018-11-12 because the JSON source code
    for the dashboard still specified the instance ID of the old
    t2.medium instance.  I fixed that, and then migrated the
    plot parameters from the Fourmilab-CPU-Monitor dashboard,
    which has more comprehensible axis scaling and legends.
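
    For reference, the dashboard body can also be pulled and pushed
    from the command line with the AWS CLI; something along these
    lines would do the job (the instance ID shown is just a
    placeholder):
        aws cloudwatch get-dashboard \
            --dashboard-name Ratburger-CPU-Monitor \
            --query DashboardBody --output text >dashboard.json
        #   Edit dashboard.json, replacing the old t2.medium instance
        #   ID (for example, "InstanceId", "i-0123456789abcdef0") with
        #   that of the new t3.medium instance, then:
        aws cloudwatch put-dashboard \
            --dashboard-name Ratburger-CPU-Monitor \
            --dashboard-body file://dashboard.json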
    
    The handling of health check alarms is very confusing.  When
    you create alarms based upon your EBS, EC2, or S3 resources,
    the alarms must be created in the same region in which the
    resource resides.  Hence, the existing alarms for CPU credit
    running low are in the Frankfurt (eu-central-1) region.  Health
    checks, however, are created within Route 53, which has no
    region (note that the region selector in the title bar says
    "Global" when you're in Route 53).  When you create an alarm
    based upon a health check metric, it is created within the N.
    Virginia (us-east-1) region.  Thus, when you display the alarms
    in your home region, you won't see the Route 53 health check
    alarms because they're in N. Virginia.  You have to change to
    that region in order to view or edit them.  Further, the
    CloudWatch Overview console is region-specific, so you have to
    switch to N. Virginia to see the health checks and then back to
    your home region for the rest.  There doesn't appear to be any
    way to create a health check dashboard or alarm in any region
    other than us-east-1 (or if there is, I haven't figured it out).
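
    If you poke at this from the command line, the same split
    applies; for example:
        #   CPU credit alarms live in the home (Frankfurt) region
        aws cloudwatch describe-alarms --region eu-central-1
        #   Route 53 health check alarms only show up in N. Virginia
        aws cloudwatch describe-alarms --region us-east-1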
    
    Created a new ~/bin/Flogtail script which monitors the
    access_log and error_log but ignores accesses with the user
    agent "Amazon-Route53-Health-Check-Service" to avoid the clutter
    they produce in the log.
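
    Something along these lines is all that's required (the log
    file paths here are generic placeholders, not necessarily those
    on the server):
        #!/bin/bash
        #   Follow the Web server logs, discarding Route 53 health
        #   check probes which would otherwise swamp the display
        tail -F /var/log/httpd/access_log /var/log/httpd/error_log |
            grep -v "Amazon-Route53-Health-Check-Service"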
    
    To back up the complete Ratburger-db database from MySQL, use:
        mysqldump -u Ratburger -p"REDACTED" Ratburger-db >rb_backup.sql
    At the present time, this backup (which consists of SQL commands
    to recreate and repopulate the database from scratch) is 131 MB
    in size.  Compressing with xz reduces its size to just 14 MB.
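
    The dump and the compression can, of course, be combined in a
    pipeline so the uncompressed file never touches the disc:
        mysqldump -u Ratburger -p"REDACTED" Ratburger-db | xz >rb_backup.sql.xz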
    
    To back up the entire MySQL collection of databases (including
    user information, permissions, etc.), you're supposed to use:
        mysqldump -u root -p"REDACTED" --all-databases >backup_all_dbs.sql
    Note that the root password is different from the password for
    the Ratburger user.  However, this fails with:
        mysqldump: Got error: 1102: "Incorrect database name '#mysql50#.rocksdb'" when selecting the database
    There is indeed a database with that name.  So, I suppose that
    if you want to do this, you'll need to explicitly specify all of
    the database names on the command line after a "--databases"
    option.  I have not tried this.
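
    If it comes to that, the list could be generated automatically,
    filtering out the bogus entry (and MySQL's internal
    information_schema and performance_schema, which are best left
    out of such a dump); something like this, equally untried:
        DBS=$(mysql -u root -p"REDACTED" -N -e "SHOW DATABASES" |
            grep -Ev '^(#mysql50#|information_schema|performance_schema)')
        mysqldump -u root -p"REDACTED" --databases $DBS >backup_all_dbs.sql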
    
    The reason for researching MySQL database backup is to explore
    getting rid of the UpdraftPlus clown car entirely.  Given the
    level of incompetence among its developers, the experience of
    other administrators with their abusive customer support, and
    irritating incessant up-selling, I have next to zero confidence
    that this thing would actually restore the data if it came to
    actually needing it.  We already have nightly Bacula backups of
    the complete site which dump the MySQL database files in binary
    form, so there's no need for the clown car backups to S3, which
    are far more costly to create and store and don't back up files
    outside the WordPress tree.  But having an exported,
    human-readable dump of the MySQL database gives a warm and fuzzy
    feeling, since in the event of the database's being corrupted,
    it would be possible to edit the dump file by hand, correct the
    problem, then re-import it into MySQL to restore a working
    database.
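
    Restoring from such a dump is correspondingly simple.  For the
    per-database backup above, assuming the Ratburger-db database
    itself still exists (a single-database dump made without
    "--databases" doesn't include the CREATE DATABASE statement),
    it would be something like:
        mysql -u Ratburger -p"REDACTED" Ratburger-db <rb_backup.sql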
    
    My current thinking is to set up a CRON job that runs, say,
    every 12 hours (which is how often the clown car currently
    backs up the database in its own format, documented only by
    reading the code [if you can call it that]), dumps the database
    into a directory in /server/var, and then lets Bacula take care
    of backing it up offsite to the Fourmilab backup server.  The
    CRON job would
    enforce a retention period on its database backups, say five
    days, after which they would be purged to avoid clogging up the
    RB server's file system. Bacula would, of course, guarantee that
    they were archived forever since we never recycle backup tapes.
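
    In outline, the crontab entry and the script it runs would look
    something like the following (the script name, backup directory,
    and file naming are placeholders, not anything settled):
        #   crontab entry: make a dump every 12 hours
        0 */12 * * *  $HOME/bin/RBdbBackup

        #   ~/bin/RBdbBackup (sketch)
        #!/bin/bash
        d=/server/var/db_backup
        mysqldump -u Ratburger -p"REDACTED" Ratburger-db |
            xz >"$d/Ratburger-db_$(date +%Y-%m-%d_%H%M).sql.xz"
        #   Enforce the retention period: purge dumps older than
        #   five days
        find "$d" -name 'Ratburger-db_*.sql.xz' -mtime +5 -delete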
    
    This will, of course, require some plumbing and testing, but the
    prospect of banishing the clown car is a powerful incentive.
    
    Published Builds 328-332 on GitHub.
    
    • Thanks very much for keeping Ratburger.org in good health.
      Thanks also for the transparency.
      It is amazing that you can maintain a stable experience for us in the midst of a platform that seems so fragile.
      Good work.