# Restore Data from GCS Using BR
This document describes how to restore the TiDB cluster data backed up using TiDB Operator in Kubernetes.
The restore method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator. For the underlying implementation, BR is used to restore the data. BR stands for Backup & Restore, which is a command-line tool for distributed backup and recovery of the TiDB cluster data.
## User Scenarios
After backing up TiDB cluster data to GCS using BR, if you need to recover the backup SST (key-value pairs) files from GCS to a TiDB cluster, you can follow the steps in this document to restore the data using BR.
> **Note:**
>
> - BR is only applicable to TiDB v3.1 or later releases.
> - Data restored by BR cannot be replicated to a downstream cluster, because BR directly imports SST files to TiDB and the downstream cluster currently cannot access the upstream SST files.
This document provides an example of how to restore the backup data from the `spec.gcs.prefix` folder of the `spec.gcs.bucket` bucket on GCS to the `demo2` TiDB cluster in the `test2` namespace. The following are the detailed steps.
## Step 1: Prepare the restore environment
Before restoring backup data on GCS to TiDB using BR, take the following steps to prepare the restore environment:
1. Download backup-rbac.yaml, and execute the following command to create the role-based access control (RBAC) resources in the `test2` namespace:

    ```shell
    kubectl apply -f backup-rbac.yaml -n test2
    ```

2. Grant permissions to the remote storage.

    Refer to GCS account permissions.
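How you grant the permissions depends on how you manage GCP credentials. As a minimal sketch, assuming you have downloaded a service account key file named `google-credentials.json` with access to the bucket, you can store it in the Secret that the `Restore` CR below references as `gcs-secret` (see GCS account permissions for the authoritative steps):

```shell
# Sketch: create the GCS credential Secret in the test2 namespace.
# The key file name google-credentials.json is an assumption; use your own key file.
kubectl create secret generic gcs-secret --from-file=credentials=./google-credentials.json -n test2
```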
For a TiDB version earlier than v4.0.8, you also need to complete the following preparation steps. For TiDB v4.0.8 or a later version, skip these preparation steps.
1. Make sure that you have the `SELECT` and `UPDATE` privileges on the `mysql.tidb` table of the target database so that the `Restore` CR can adjust the GC time before and after the restore. A sketch of granting these privileges is shown after this list.

2. Create the `restore-demo2-tidb-secret` secret to store the root account and password to access the TiDB cluster:

    ```shell
    kubectl create secret generic restore-demo2-tidb-secret --from-literal=user=root --from-literal=password=<password> --namespace=test2
    ```
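If the account stored in the secret does not already have these privileges, you can grant them through the MySQL client. This is only an illustration with assumed values: replace `<tidb-host>` with your TiDB address, `4000` with your port if it differs, and `<user>` with the account used in the secret:

```shell
# Illustration only: grant the privileges the Restore CR needs to adjust the GC time.
# <tidb-host> and <user> are placeholders, not values defined in this document.
mysql -h <tidb-host> -P 4000 -u root -p \
  -e "GRANT SELECT, UPDATE ON mysql.tidb TO '<user>'@'%';"
```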
## Step 2: Restore the backup data to a TiDB cluster
1. Create the `Restore` custom resource (CR) to restore the specified data to your cluster:

    ```shell
    kubectl apply -f restore.yaml
    ```

    The content of the `restore.yaml` file is as follows:

    ```yaml
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: Restore
    metadata:
      name: demo2-restore-gcs
      namespace: test2
    spec:
      # backupType: full
      br:
        cluster: demo2
        clusterNamespace: test2
        # logLevel: info
        # statusAddr: ${status-addr}
        # concurrency: 4
        # rateLimit: 0
        # checksum: true
        # sendCredToTikv: true
      # Only needed for TiDB Operator < v1.1.10 or TiDB < v4.0.8
      # to:
      #   host: ${tidb_host}
      #   port: ${tidb_port}
      #   user: ${tidb_user}
      #   secretName: restore-demo2-tidb-secret
      gcs:
        projectId: ${project_id}
        secretName: gcs-secret
        bucket: ${bucket}
        prefix: ${prefix}
        # location: us-east1
        # storageClass: STANDARD_IA
        # objectAcl: private
    ```

    When configuring `restore.yaml`, note the following:

    - For more information about GCS configuration, refer to GCS fields.
    - Some parameters in `.spec.br` are optional, such as `logLevel`, `statusAddr`, `concurrency`, `rateLimit`, `checksum`, `timeAgo`, and `sendCredToTikv`. For more information about BR configuration, refer to BR fields.
    - For v4.0.8 or a later version, BR can automatically adjust `tikv_gc_life_time`. You do not need to configure the `spec.to` fields in the `Restore` CR.
    - For more information about the `Restore` CR fields, refer to Restore CR fields.
2. After creating the `Restore` CR, execute the following command to check the restore status:

    ```shell
    kubectl get rt -n test2 -o wide
    ```
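For more detail than the one-line summary, you can also describe the `Restore` CR by the name set in `restore.yaml`:

```shell
# Show the full status and events of the Restore CR created above
kubectl describe restore demo2-restore-gcs -n test2
```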
## Troubleshooting
If you encounter any problem during the restore process, refer to Common Deployment Failures.
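As a first step, it usually helps to look at the Kubernetes events in the namespace and at the logs of the Pods created for the restore. The commands below are a sketch that assumes the restore Pods carry the `Restore` CR name (`demo2-restore-gcs`) in their names:

```shell
# Recent events in the restore namespace
kubectl get events -n test2 --sort-by=.metadata.creationTimestamp

# Find the Pods created for the restore, then inspect their logs
kubectl get pods -n test2 | grep demo2-restore-gcs
kubectl logs -n test2 <restore-pod-name>
```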