ServiceNow MySQL Database Restore
1. Infrastructure
GitHub repo: hmcts/servicenow-infrastructure
Pipeline: hmcts.servicenow-infrastructure
2. Azure Resources
- Resource Group: rg-servicenow-prod
- Storage Account: servicenowprodukssa
- MySQL Flexible Server: servicenow-mysql-prod
3. Storage Account Setup and Database Download
3.1 SFTP User Creation
First, create a new SFTP user on the Storage Account servicenowprodukssa, as this is not managed through automation.
3.2 SSH Key Generation and Configuration
To mount using SFTP authentication on the bastion-prod VM, generate an SSH key pair on the bastion VM itself:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/azsftp_id_rsa
This stores the private key at the specified path. Associate the public key with the SFTP user created on the storage account.
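The public key can be attached to the SFTP local user from the Azure CLI as well as the portal. A minimal sketch, assuming the local user is named mysql to match the mount command below:
# Attach the generated public key to the storage account's SFTP local user
# (the local-user name "mysql" is an assumption based on the mount command below)
az storage account local-user update \
  --resource-group rg-servicenow-prod \
  --account-name servicenowprodukssa \
  --name mysql \
  --ssh-authorized-key key="$(cat ~/.ssh/azsftp_id_rsa.pub)"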
3.3 Mounting the Storage Account
After logging into the bastion-prod VM as the root user:
- Create a new directory:
mkdir -p ~/azsftp2
- Mount the Storage Account using the private key:
sshfs \
"servicenowprodukssa.mysql-backup.mysql@servicenowprodukssa.blob.core.windows.net:/" \
/root/azsftp2 \
-o port=22 \
-o IdentityFile=~/.ssh/azsftp_id_rsa \
-o reconnect \
-o ServerAliveInterval=15 \
-o ServerAliveCountMax=5 \
-o TCPKeepAlive=yes
- Verify that the SFTP mount is active:
mount | grep sshfs
- Test the mount by creating directories and files:
cd ~/azsftp2
mkdir test
cd test
vi test.txt
Verify that these files appear in the Storage Account container.
3.4 Downloading the MySQL Database
3.4.1 Connect to ServiceNow SFTP
Receive the private key from the ServiceNow team via email and decrypt it using the password they provided.
Connect to ServiceNow SFTP:
sftp -i private_key mojcppprod_sftpmig@sftp-lhr.service-now.com
- Enter the password when prompted (same password used to decrypt the private key).
3.4.2 Download Database Files
- After connecting to ServiceNow SFTP, verify the uploaded files:
sftp> ls -l
-rw-r-xr-- 1 0 0 527356255387 Sep 9 15:02 CHG66945130_mojcppprod_1.sql.gz
-rw-r-xr-- 1 0 0 520794 Sep 9 14:41 CHG66945130_mojcppprod_1_schema.sql.gz
- Set the mounted location as the download destination:
sftp> lcd /root/azsftp2
- Download the database file (reget resumes a partial transfer if the connection drops):
sftp> reget CHG66945130_mojcppprod_1.sql.gz
Note: The download process took several hours as the file is approximately 492 GB in size.
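Once the transfer finishes, check the local file size against the remote listing above (527356255387 bytes):
# Should print 527356255387 to match the remote listing
stat -c %s /root/azsftp2/CHG66945130_mojcppprod_1.sql.gz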
4. MySQL Server Setup
Connection details:
- Host: 10.144.36.11
- Port: 3306
- Admin username: sqladmin
- ServiceNow database name: servicenow
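Before starting the restore, confirm the bastion VM can reach the server with these details:
# Quick connectivity check; prompts for the sqladmin password
mysql -h 10.144.36.11 -P 3306 -u sqladmin -p -e "SELECT VERSION();"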
5. ServiceNow Database Restore
Create and attach a 2-4 TB data disk to the bastion-prod VM.
Create a new disk partition, format and mount it:
# List the SCSI buses
ls /sys/class/scsi_host/
# Run for all buses, replacing X with the host number, to scan for new disks
echo "- - -" > /sys/class/scsi_host/hostX/scan
# Locate the new disk
lsblk
# Create a new partition, replacing X with the letter of the new disk
parted /dev/sdX
parted> mklabel gpt
parted> mkpart primary 0% 100%
parted> quit
# Format the new partition
mkfs.ext4 /dev/sdX1
# Create a mount point and mount the new partition
mkdir /mnt/servicenowrestore
mount /dev/sdX1 /mnt/servicenowrestore
chmod 700 /mnt/servicenowrestore
Download the MySQL backup file from the Storage Account
We use AzCopy as it is much faster than the SFTP mount.
# Replace SAS_TOKEN with the actual SAS token generated from the Azure portal
azcopy copy \
  'https://servicenowprodukssa.blob.core.windows.net/mysql-backup/CHG66945130_mojcppprod_1.sql.gz?SAS_TOKEN' \
  '/mnt/servicenowrestore/CHG66945130_mojcppprod_1.sql.gz'
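The SAS token can also be generated from the CLI rather than the portal; a sketch, with an example expiry date:
# Generate a read-only SAS token for the mysql-backup container (expiry is an example)
az storage container generate-sas \
  --account-name servicenowprodukssa \
  --name mysql-backup \
  --permissions r \
  --expiry 2025-12-31T00:00:00Z \
  --output tsv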
Split the SQL file into individual tables for parallel import
In /mnt/servicenowrestore, create split_by_table.awk with the following content:
BEGIN {
    header = "/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;\n"
    header = header "/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;\n"
    header = header "/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;\n"
    header = header "/*!40101 SET NAMES utf8mb4 */;\n"
    header = header "/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;\n"
    header = header "/*!40103 SET TIME_ZONE=\"+00:00\" */;\n"
    header = header "/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;\n"
    header = header "/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE=\"NO_AUTO_VALUE_ON_ZERO\" */;\n"
    header = header "/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;\n\n"
}
/^-- Dumping data for table / {
    if (out) close(out)
    counter++
    out = counter ".sql"
    printf "%s", header > out
}
/^-- Dumping data for table / || out { print > out }
Use zcat, or alternatively rapidgzip (faster), together with the above AWK script. We use mawk as it is faster than gawk.
Note: this will take two or more hours to complete.
cd /mnt/servicenowrestore
mkdir CHG66945130_mojcppprod_1
cd CHG66945130_mojcppprod_1
pv ../CHG66945130_mojcppprod_1.sql.gz | zcat | mawk -f ../split_by_table.awk
Temporarily scale up the MySQL Flexible Server
Scale up the MySQL Flexible Server to at least 8 vCores and 12,000 storage IOPS for a faster import, as sketched below.
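This can be done through the portal or the Azure CLI; the SKU name below is an example of an 8-vCore tier, not necessarily the one this server uses:
# Scale to 8 vCores and 12000 IOPS (the SKU name is an example)
az mysql flexible-server update \
  --resource-group rg-servicenow-prod \
  --name servicenow-mysql-prod \
  --sku-name Standard_D8ds_v4 \
  --iops 12000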
Create the ServiceNow database
- Go to the Azure Portal and find the MySQL Flexible Server.
- Go to Settings > Databases > Add database.
- Enter servicenow as the database name.
- Select utf8mb4 as the character set.
- Select utf8mb4_0900_ai_ci as the collation.
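Equivalently, the database can be created from the mysql client:
CREATE DATABASE servicenow CHARACTER SET utf8mb4 COLLATE utf8mb4_0900_ai_ci;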
Import the database schema
Create mysql.cnf in /mnt/servicenowrestore with the following content:
[client]
user=<mysql_admin_username>
password=<password>
host=<mysql_ip_address>
port=<mysql_port>
The schema file was exported from MariaDB, so there are some incompatibilities that need to be resolved before it can be imported into the Azure MySQL instance.
First, per-table encryption is not supported in Azure MySQL, so it needs to be removed from the schema file:
sed 's/ `ENCRYPTED`=YES//g' CHG66945130_mojcppprod_1_schema.sql > CHG66945130_mojcppprod_1_schema.mysql.sql
Another issue is that MariaDB supports more indexes per table than MySQL, which allows a maximum of 64. The following tables need some indexes removed from the schema file (total index count in brackets); a sketch for checking the counts follows the list:
- cmdb (69 indexes)
- cmdb$par1 (65 indexes)
- task (68 indexes)
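A quick way to check these counts in the schema dump is to count the KEY lines inside a CREATE TABLE block; a sketch assuming mysqldump-style formatting with two-space indented key definitions:
# Count index definitions for one table (cmdb shown; repeat per table)
awk '/^CREATE TABLE `cmdb` /,/^\) ENGINE/' CHG66945130_mojcppprod_1_schema.mysql.sql \
  | grep -c '^  KEY \|^  UNIQUE KEY '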
The last issue is that MariaDB supports larger inline columns of type VARCHAR. Change the ROW_FORMAT to DYNAMIC for the following tables:
- cmdb
- cmdb$par1
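A sketch of that edit with sed, assuming the dump declares ROW_FORMAT=COMPACT in the table options; check the file for the actual value first, and note this substitution touches every table that declares COMPACT, so edit the file manually if you want to limit the change to the two tables above:
# Assumes ROW_FORMAT=COMPACT appears in the dump; verify before running
sed -i 's/ROW_FORMAT=COMPACT/ROW_FORMAT=DYNAMIC/g' CHG66945130_mojcppprod_1_schema.mysql.sql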
Now you can import the schema into MySQL:
pv CHG66945130_mojcppprod_1_schema.mysql.sql | mysql --defaults-extra-file=/mnt/servicenowrestore/mysql.cnf servicenow
Further data optimisation
The table sys_object_source holds a lot of data and would take too long to import this way. For tables like this, it is better to convert the SQL file into CSV files and use the LOAD DATA LOCAL INFILE command to import the data, which is far more efficient. Convert the SQL files for this table into chunks of CSV files; we used chunks of 500k rows per CSV file. Load the chunks one by one, as parallel runs lock the table and cause transaction timeout errors.
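The row chunking itself is straightforward once the data is in CSV form; a sketch with split, assuming a single large sys_object_source.csv has already been produced (the output names are examples):
# Split into numbered 500k-row chunks under csv_files/
mkdir -p csv_files
split -l 500000 -d --additional-suffix=.csv \
  sys_object_source.csv csv_files/sys_object_source_part_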
Each CSV needs a corresponding SQL file that will look something like this:
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET sql_log_bin = 0;
LOAD DATA LOCAL INFILE '/mnt/servicenowrestore/csv_files/sys_object_source_part_1.csv'
INTO TABLE sys_object_source
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
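Rather than writing these by hand, a small loop can generate one loader file per chunk; a sketch assuming the chunk naming above:
# Generate a loader SQL file next to every CSV chunk
for csv in /mnt/servicenowrestore/csv_files/sys_object_source_part_*.csv; do
  cat > "${csv%.csv}.sql" <<EOF
SET unique_checks = 0;
SET foreign_key_checks = 0;
SET sql_log_bin = 0;
LOAD DATA LOCAL INFILE '$csv'
INTO TABLE sys_object_source
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';
EOF
done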
Then a command like the following can be used to load all the CSV chunks:
ls csv_files/*.sql | parallel --joblog parallel_5775.log -j1 \
  "mysql --defaults-extra-file=/mnt/servicenowrestore/mysql.cnf --local-infile=1 servicenow < {}"
You can monitor the progress by looking at the parallel_5775.log file.
Import the database data into MySQL in parallel
Run the import command using GNU parallel with 16 parallel jobs:
Note: This will take a couple of days on a MySQL instance with 8 vCores and 12k storage IOPS.
cd /mnt/servicenowrestore
ls CHG66945130_mojcppprod_1/*.sql | parallel --joblog parallel.log -j16 \
  "mysql --defaults-extra-file=/mnt/servicenowrestore/mysql.cnf servicenow < {}"
You can monitor the progress by looking at the parallel.log file. The bulk of the import takes around 18 hours; just a few tables take more than 10 hours to complete.
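GNU parallel's joblog records an exit status for each job, so failed imports can be listed once the run finishes:
# List any import jobs that exited non-zero (Exitval is column 7 of the joblog)
awk 'NR > 1 && $7 != 0' parallel.log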
Clean up
After the import is complete, remember to:
- Unmount the SFTP mount from the bastion VM.
- Unmount the data disk and detach it from the bastion VM.
- Delete the data disk (the database dump is still in the Storage Account).
- Scale down the MySQL Flexible Server to its original size.
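A sketch of the unmount steps on the bastion VM; the disk detach, disk deletion, and server scale-down are done through the Azure portal or CLI:
# Unmount the SFTP mount and the restore disk before detaching
fusermount -u /root/azsftp2
umount /mnt/servicenowrestore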