diff --git a/en/TOC.md b/en/TOC.md index eb5091e00..6e107065b 100644 --- a/en/TOC.md +++ b/en/TOC.md @@ -28,7 +28,6 @@ - [Deploy TiDB Across Multiple Kubernetes Clusters](deploy-tidb-cluster-across-multiple-kubernetes.md) - [Deploy a Heterogeneous TiDB Cluster](deploy-heterogeneous-tidb-cluster.md) - [Deploy TiCDC](deploy-ticdc.md) - - [Deploy TiDB Binlog](deploy-tidb-binlog.md) - Monitor and Alert - [Deploy Monitoring and Alerts for TiDB](monitor-a-tidb-cluster.md) - [Monitor and Diagnose TiDB Using TiDB Dashboard](access-dashboard.md) @@ -118,8 +117,6 @@ - [Required RBAC Rules](tidb-operator-rbac.md) - Tools - [TiDB Toolkit](tidb-toolkit.md) - - Configure - - [Configure tidb-drainer Chart](configure-tidb-binlog-drainer.md) - [Log Collection](logs-collection.md) - [Monitoring and Alert on Kubernetes](monitor-kubernetes.md) - [PingCAP Clinic Diagnostic Data](clinic-data-collection.md) diff --git a/en/backup-by-ebs-snapshot-across-multiple-kubernetes.md b/en/backup-by-ebs-snapshot-across-multiple-kubernetes.md index a904ff29d..f7eba19aa 100644 --- a/en/backup-by-ebs-snapshot-across-multiple-kubernetes.md +++ b/en/backup-by-ebs-snapshot-across-multiple-kubernetes.md @@ -36,7 +36,7 @@ To initialize the restored volume more efficiently, it is recommended to **separ - For TiKV configuration, do not set `resolved-ts.enable` to `false`, and do not set `raftstore.report-min-resolved-ts-interval` to `"0s"`. Otherwise, it can lead to backup failure. - For PD configuration, do not set `pd-server.min-resolved-ts-persistence-interval` to `"0s"`. Otherwise, it can lead to backup failure. - To use this backup method, the TiDB cluster must be deployed on AWS EC2 and use AWS EBS volumes. -- This backup method is currently not supported for TiFlash, TiCDC, DM, and TiDB Binlog nodes. +- This backup method is currently not supported for TiFlash, TiCDC and DM nodes. > **Note:** > diff --git a/en/backup-to-aws-s3-by-snapshot.md b/en/backup-to-aws-s3-by-snapshot.md index ef50aeca3..83fdba9b0 100644 --- a/en/backup-to-aws-s3-by-snapshot.md +++ b/en/backup-to-aws-s3-by-snapshot.md @@ -30,7 +30,7 @@ If you have any other requirements, select an appropriate backup method based on - For TiKV configuration, do not set [`resolved-ts.enable`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#enable-2) to `false`, and do not set [`raftstore.report-min-resolved-ts-interval`](https://docs.pingcap.com/tidb/stable/tikv-configuration-file#report-min-resolved-ts-interval-new-in-v600) to `"0s"`. Otherwise, it can lead to backup failure. - For PD configuration, do not set [`pd-server.min-resolved-ts-persistence-interval`](https://docs.pingcap.com/tidb/stable/pd-configuration-file#min-resolved-ts-persistence-interval-new-in-v600) to `"0s"`. Otherwise, it can lead to backup failure. - To use this backup method, the TiDB cluster must be deployed on AWS EKS and uses AWS EBS volumes. -- This backup method is currently not supported for TiFlash, TiCDC, DM, and TiDB Binlog nodes. +- This backup method is currently not supported for TiFlash, TiCDC and DM nodes. > **Note:** > diff --git a/en/configure-a-tidb-cluster.md b/en/configure-a-tidb-cluster.md index 8706b1337..4986f9f07 100644 --- a/en/configure-a-tidb-cluster.md +++ b/en/configure-a-tidb-cluster.md @@ -37,15 +37,15 @@ The cluster name can be configured by changing `metadata.name` in the `TiDBCuste ### Version -Usually, components in a cluster are in the same version. 
It is recommended to configure `spec..baseImage` and `spec.version`, if you need to configure different versions for different components, you can configure `spec..version`.
+Usually, components in a cluster are in the same version. It is recommended to configure `spec..baseImage` and `spec.version`. If you need to configure different versions for different components, you can configure `spec..version`.

Here are the formats of the parameters:

- `spec.version`: the format is `imageTag`, such as `v8.5.0`

-- `spec..baseImage`: the format is `imageName`, such as `pingcap/tidb`
+- `spec..baseImage`: the format is `imageName`, such as `pingcap/tidb`

-- `spec..version`: the format is `imageTag`, such as `v8.5.0`
+- `spec..version`: the format is `imageTag`, such as `v8.5.0`

### Recommended configuration

@@ -246,7 +246,7 @@ To mount multiple PVs for PD microservices (taking the `tso` microservice as an

### HostNetwork

-For PD, TiKV, TiDB, TiFlash, TiProxy, TiCDC, and Pump, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).
+For PD, TiKV, TiDB, TiFlash, TiProxy, and TiCDC, you can configure the Pods to use the host namespace [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy).

To enable `HostNetwork` for all supported components, configure `spec.hostNetwork: true`.

diff --git a/en/configure-storage-class.md b/en/configure-storage-class.md
index 86e30c923..be73047c6 100644
--- a/en/configure-storage-class.md
+++ b/en/configure-storage-class.md
@@ -6,7 +6,7 @@ aliases: ['/docs/tidb-in-kubernetes/dev/configure-storage-class/','/docs/dev/tid

# Persistent Storage Class Configuration on Kubernetes

-TiDB cluster components such as PD, TiKV, TiDB monitoring, TiDB Binlog, and `tidb-backup` require persistent storage for data. To achieve this on Kubernetes, you need to use [PersistentVolume (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Kubernetes supports different types of [storage classes](https://kubernetes.io/docs/concepts/storage/volumes/), which can be categorized into two main types:
+TiDB cluster components such as PD, TiKV, TiDB monitoring, and BR require persistent storage for data. To achieve this on Kubernetes, you need to use [PersistentVolume (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/). Kubernetes supports different types of [storage classes](https://kubernetes.io/docs/concepts/storage/volumes/), which can be categorized into two main types:

- Network storage

@@ -28,9 +28,9 @@ TiKV uses the Raft protocol to replicate data. When a node fails, PD automatical

PD also uses Raft to replicate data. PD is not an I/O-intensive application, but rather a database for storing cluster meta information. Therefore, a local SAS disk or network SSD storage such as EBS General Purpose SSD (gp2) volumes on AWS or SSD persistent disks on Google Cloud can meet the requirements.

-To ensure availability, it is recommended to use network storage for components such as TiDB monitoring, TiDB Binlog, and `tidb-backup` because they do not have redundant replicas. TiDB Binlog's Pump and Drainer components are I/O-intensive applications that require low read and write latency, so it is recommended to use high-performance network storage such as EBS Provisioned IOPS SSD (io1) volumes on AWS or SSD persistent disks on Google Cloud.
+To ensure availability, it is recommended to use network storage for components such as TiDB monitoring, and BR because they do not have redundant replicas. -When deploying TiDB clusters or `tidb-backup` with TiDB Operator, you can configure the `StorageClass` for the components that require persistent storage via the corresponding `storageClassName` field in the `values.yaml` configuration file. The `StorageClassName` is set to `local-storage` by default. +When deploying TiDB clusters or BR with TiDB Operator, you can configure the `StorageClass` for the components that require persistent storage via the corresponding `storageClassName` field in the `values.yaml` configuration file. The `StorageClassName` is set to `local-storage` by default. ## Network PV configuration @@ -80,12 +80,6 @@ Currently, Kubernetes supports statically allocated local storage. To create a l > > The number of directories you create depends on the planned number of TiDB clusters. Each directory has a corresponding PV created, and each TiDB cluster's monitoring data uses one PV. -- For a disk that stores TiDB Binlog and backup data, follow the [steps](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs) to mount the disk. First, create multiple directories on the disk and bind mount the directories into the `/mnt/backup` directory. - - >**Note:** - > - > The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. Each directory has a corresponding PV created, and each Pump and Drainer use one PV. All [Ad-hoc full backup](backup-to-s3.md#ad-hoc-full-backup-to-s3-compatible-storage) tasks and [scheduled full backup](backup-to-s3.md#scheduled-full-backup-to-s3-compatible-storage) tasks share one PV. - The `/mnt/ssd`, `/mnt/sharedssd`, `/mnt/monitoring`, and `/mnt/backup` directories mentioned above are discovery directories used by local-volume-provisioner. For each subdirectory in the discovery directory, local-volume-provisioner creates a corresponding PV. ### Step 2: Deploy local-volume-provisioner diff --git a/en/configure-tidb-binlog-drainer.md b/en/configure-tidb-binlog-drainer.md deleted file mode 100644 index 7e1344ccd..000000000 --- a/en/configure-tidb-binlog-drainer.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: TiDB Binlog Drainer Configurations on Kubernetes -summary: Learn the configurations of a TiDB Binlog Drainer on Kubernetes. -aliases: ['/docs/tidb-in-kubernetes/dev/configure-tidb-binlog-drainer/'] ---- - -# TiDB Binlog Drainer Configurations on Kubernetes - -This document introduces the configuration parameters for a [TiDB Binlog](deploy-tidb-binlog.md) drainer on Kubernetes. - -> **Warning:** -> -> Starting from TiDB v7.5.0, TiDB Binlog replication is deprecated. Starting from v8.3.0, TiDB Binlog is fully deprecated, with removal planned for a future release. For incremental data replication, use [TiCDC](deploy-ticdc.md) instead. For point-in-time recovery (PITR), use PITR. - -## Configuration parameters - -The following table contains all configuration parameters available for the `tidb-drainer` chart. Refer to [Use Helm](tidb-toolkit.md#use-helm) to learn how to configure these parameters. 
- -| Parameter | Description | Default Value | -| :----- | :---- | :----- | -| `timezone` | Timezone configuration | `UTC` | -| `drainerName` | The name of `Statefulset` | `""` | -| `clusterName` | The name of the source TiDB cluster | `demo` | -| `clusterVersion` | The version of the source TiDB cluster | `v3.0.1` | -| `baseImage` | The base image of TiDB Binlog | `pingcap/tidb-binlog` | -| `imagePullPolicy` | The pulling policy of the image | `IfNotPresent` | -| `logLevel` | The log level of the drainer process | `info` | -| `storageClassName` | `storageClass` used by the drainer. `storageClassName` refers to a type of storage provided by the Kubernetes cluster, which might map to a level of service quality, a backup policy, or to any policy determined by the cluster administrator. Detailed reference: [storage-classes](https://kubernetes.io/docs/concepts/storage/storage-classes) | `local-storage` | -| `storage` | The storage limit of the drainer Pod. Note that you should set a larger size if `db-type` is set to `pb` | `10Gi` | -| `disableDetect` | Determines whether to disable casualty detection | `false` | -| `initialCommitTs` | Used to initialize a checkpoint if the drainer does not have one. The value is a string type, such as `"424364429251444742"` | `"-1"` | -| `tlsCluster.enabled` | Whether or not to enable TLS between clusters | `false` | -| `config` | The configuration file passed to the drainer. Detailed reference: [drainer.toml](https://github.com/pingcap/tidb-binlog/blob/master/cmd/drainer/drainer.toml) | (see below) | -| `resources` | The resource limits and requests of the drainer Pod | `{}` | -| `nodeSelector` | Ensures that the drainer Pod is only scheduled to the node with the specific key-value pair as the label. Detailed reference: [`nodeselector`](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) | `{}` | -| `tolerations` | Applies to drainer Pods, allowing the Pods to be scheduled to the nodes with specified taints. Detailed reference: [taint-and-toleration](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration) | `{}` | -| `affinity` | Defines scheduling policies and preferences of the drainer Pod. Detailed reference: [affinity-and-anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) | `{}` | - -The default value of `config` is: - -```toml -detect-interval = 10 -compressor = "" -[syncer] -worker-count = 16 -disable-dispatch = false -ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql" -safe-mode = false -txn-batch = 20 -db-type = "file" -[syncer.to] -dir = "/data/pb" -``` diff --git a/en/deploy-cluster-on-arm64.md b/en/deploy-cluster-on-arm64.md index b594158cb..1acab3b2f 100644 --- a/en/deploy-cluster-on-arm64.md +++ b/en/deploy-cluster-on-arm64.md @@ -52,9 +52,6 @@ Before starting the process, make sure that Kubernetes clusters are deployed on tikv: baseImage: pingcap/tikv-arm64 # ... - pump: - baseImage: pingcap/tidb-binlog-arm64 - # ... ticdc: baseImage: pingcap/ticdc-arm64 # ... diff --git a/en/deploy-failures.md b/en/deploy-failures.md index e5e032c04..7ec8a3a33 100644 --- a/en/deploy-failures.md +++ b/en/deploy-failures.md @@ -37,7 +37,7 @@ kubectl describe restores -n ${namespace} ${restore_name} The Pending state of a Pod is usually caused by conditions of insufficient resources, for example: -- The `StorageClass` of the PVC used by PD, TiKV, TiFlash, Pump, Monitor, Backup, and Restore Pods does not exist or the PV is insufficient. 
+- The `StorageClass` of the PVC used by PD, TiKV, TiFlash, Monitor, Backup, and Restore Pods does not exist or the PV is insufficient. - No nodes in the Kubernetes cluster can satisfy the CPU or memory resources requested by the Pod. - The number of TiKV or PD replicas and the number of nodes in the cluster do not satisfy the high availability scheduling policy of tidb-scheduler. - The certificates used by TiDB or TiProxy components are not configured. diff --git a/en/deploy-tidb-binlog.md b/en/deploy-tidb-binlog.md deleted file mode 100644 index e6968e211..000000000 --- a/en/deploy-tidb-binlog.md +++ /dev/null @@ -1,442 +0,0 @@ ---- -title: Deploy TiDB Binlog -summary: Learn how to deploy TiDB Binlog for a TiDB cluster on Kubernetes. -aliases: ['/docs/tidb-in-kubernetes/dev/deploy-tidb-binlog/'] ---- - -# Deploy TiDB Binlog - -This document describes how to maintain [TiDB Binlog](https://docs.pingcap.com/tidb/stable/tidb-binlog-overview) of a TiDB cluster on Kubernetes. - -> **Warning:** -> -> Starting from TiDB v7.5.0, TiDB Binlog replication is deprecated. Starting from v8.3.0, TiDB Binlog is fully deprecated, with removal planned for a future release. For incremental data replication, use [TiCDC](deploy-ticdc.md) instead. For point-in-time recovery (PITR), use PITR. - -## Prerequisites - -- [Deploy TiDB Operator](deploy-tidb-operator.md); -- [Install Helm](tidb-toolkit.md#use-helm) and configure it with the official PingCAP chart. - -## Deploy TiDB Binlog in a TiDB cluster - -TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster with TiDB Binlog enabled, or enable TiDB Binlog in an existing TiDB cluster, take the following steps. - -### Deploy Pump - -1. Modify the `TidbCluster` CR file to add the Pump configuration. - - For example: - - ```yaml - spec: - ... - pump: - baseImage: pingcap/tidb-binlog - version: v8.1.0 - replicas: 1 - storageClassName: local-storage - requests: - storage: 30Gi - schedulerName: default-scheduler - config: - addr: 0.0.0.0:8250 - gc: 7 - heartbeat-interval: 2 - ``` - - Since v1.1.6, TiDB Operator supports passing raw TOML configuration to the component: - - ```yaml - spec: - ... - pump: - baseImage: pingcap/tidb-binlog - version: v8.1.0 - replicas: 1 - storageClassName: local-storage - requests: - storage: 30Gi - schedulerName: default-scheduler - config: | - addr = "0.0.0.0:8250" - gc = 7 - heartbeat-interval = 2 - ``` - - Edit `version`, `replicas`, `storageClassName`, and `requests.storage` according to your cluster. - -2. Set affinity and anti-affinity for TiDB and Pump. - - If you enable TiDB Binlog in the production environment, it is recommended to set affinity and anti-affinity for TiDB and the Pump component; if you enable TiDB Binlog in a test environment on the internal network, you can skip this step. - - By default, the affinity of TiDB and Pump is set to `{}`. Currently, each TiDB instance does not have a corresponding Pump instance by default. When TiDB Binlog is enabled, if Pump and TiDB are separately deployed and network isolation occurs, and `ignore-error` is enabled in TiDB components, TiDB loses binlogs. - - In this situation, it is recommended to deploy a TiDB instance and a Pump instance on the same node using the affinity feature, and to split Pump instances on different nodes using the anti-affinity feature. For each node, only one Pump instance is required. 
The steps are as follows: - - * Configure `spec.tidb.affinity` as follows: - - ```yaml - spec: - tidb: - affinity: - podAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: "app.kubernetes.io/component" - operator: In - values: - - "pump" - - key: "app.kubernetes.io/managed-by" - operator: In - values: - - "tidb-operator" - - key: "app.kubernetes.io/name" - operator: In - values: - - "tidb-cluster" - - key: "app.kubernetes.io/instance" - operator: In - values: - - ${cluster_name} - topologyKey: kubernetes.io/hostname - ``` - - * Configure `spec.pump.affinity` as follows: - - ```yaml - spec: - pump: - affinity: - podAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: "app.kubernetes.io/component" - operator: In - values: - - "tidb" - - key: "app.kubernetes.io/managed-by" - operator: In - values: - - "tidb-operator" - - key: "app.kubernetes.io/name" - operator: In - values: - - "tidb-cluster" - - key: "app.kubernetes.io/instance" - operator: In - values: - - ${cluster_name} - topologyKey: kubernetes.io/hostname - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: "app.kubernetes.io/component" - operator: In - values: - - "pump" - - key: "app.kubernetes.io/managed-by" - operator: In - values: - - "tidb-operator" - - key: "app.kubernetes.io/name" - operator: In - values: - - "tidb-cluster" - - key: "app.kubernetes.io/instance" - operator: In - values: - - ${cluster_name} - topologyKey: kubernetes.io/hostname - ``` - - > **Note:** - > - > If you update the affinity configuration of the TiDB components, it will cause rolling updates of the TiDB components in the cluster. - -## Deploy Drainer - -To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB cluster, take the following steps: - -1. Make sure that the PingCAP Helm repository is up to date: - - {{< copyable "shell-regular" >}} - - ```shell - helm repo update - ``` - - {{< copyable "shell-regular" >}} - - ```shell - helm search repo tidb-drainer -l - ``` - -2. Get the default `values.yaml` file to facilitate customization: - - {{< copyable "shell-regular" >}} - - ```shell - helm inspect values pingcap/tidb-drainer --version=${chart_version} > values.yaml - ``` - -3. Modify the `values.yaml` file to specify the source TiDB cluster and the downstream database of the drainer. Here is an example: - - ```yaml - clusterName: example-tidb - clusterVersion: v8.1.0 - baseImage:pingcap/tidb-binlog - storageClassName: local-storage - storage: 10Gi - initialCommitTs: "-1" - config: | - detect-interval = 10 - [syncer] - worker-count = 16 - txn-batch = 20 - disable-dispatch = false - ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql" - safe-mode = false - db-type = "tidb" - [syncer.to] - host = "downstream-tidb" - user = "root" - password = "" - port = 4000 - ``` - - The `clusterName` and `clusterVersion` must match the desired source TiDB cluster. - - The `initialCommitTs` is the starting commit timestamp of data replication when Drainer has no checkpoint. The value must be set as a string type, such as `"424364429251444742"`. - - For complete configuration details, refer to [TiDB Binlog Drainer Configurations on Kubernetes](configure-tidb-binlog-drainer.md). - -4. 
Deploy Drainer: - - {{< copyable "shell-regular" >}} - - ```shell - helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${chart_version} -f values.yaml - ``` - - If the server does not have an external network, refer to [deploy the TiDB cluster](deploy-on-general-kubernetes.md#deploy-the-tidb-cluster) to download the required Docker image on the machine with an external network and upload it to the server. - - > **Note:** - > - > This chart must be installed to the same namespace as the source TiDB cluster. - -## Enable TLS - -### Enable TLS between TiDB components - -If you want to enable TLS for the TiDB cluster and TiDB Binlog, refer to [Enable TLS between Components](enable-tls-between-components.md). - -After you have created a secret and started a TiDB cluster with Pump, edit the `values.yaml` file to set the `tlsCluster.enabled` value to `true`, and configure the corresponding `certAllowedCN`: - -```yaml -... -tlsCluster: - enabled: true - # certAllowedCN: - # - TiDB -... -``` - -### Enable TLS between Drainer and the downstream database - -If you set the downstream database of `tidb-drainer` to `mysql/tidb`, and if you want to enable TLS between Drainer and the downstream database, take the following steps. - -1. Create a secret that contains the TLS information of the downstream database. - - ```bash - kubectl create secret generic ${downstream_database_secret_name} --namespace=${namespace} --from-file=tls.crt=client.pem --from-file=tls.key=client-key.pem --from-file=ca.crt=ca.pem - ``` - - `tidb-drainer` saves the checkpoint in the downstream database by default, so you only need to configure `tlsSyncer.tlsClientSecretName` and the corresponding `cerAllowedCN`: - - ```yaml - tlsSyncer: - tlsClientSecretName: ${downstream_database_secret_name} - # certAllowedCN: - # - TiDB - ``` - -2. To save the checkpoint of `tidb-drainer` to **other databases that have enabled TLS**, create a secret that contains the TLS information of the checkpoint database: - - ```bash - kubectl create secret generic ${checkpoint_tidb_client_secret} --namespace=${namespace} --from-file=tls.crt=client.pem --from-file=tls.key=client-key.pem --from-file=ca.crt=ca.pem - ``` - - Edit the `values.yaml` file to set the `tlsSyncer.checkpoint.tlsClientSecretName` value to `${checkpoint_tidb_client_secret}`, and configure the corresponding `certAllowedCN`: - - ```yaml - ... - tlsSyncer: {} - tlsClientSecretName: ${downstream_database_secret_name} - # certAllowedCN: - # - TiDB - checkpoint: - tlsClientSecretName: ${checkpoint_tidb_client_secret} - # certAllowedCN: - # - TiDB - ... - ``` - -## Remove Pump/Drainer nodes - -For details on how to maintain the node state of the TiDB Binlog cluster, refer to [Starting and exiting a Pump or Drainer process](https://docs.pingcap.com/tidb/stable/maintain-tidb-binlog-cluster#starting-and-exiting-a-pump-or-drainer-process). - -If you want to remove the TiDB Binlog component completely, it is recommended that you first remove Pump nodes and then remove Drainer nodes. - -If TLS is enabled for the TiDB Binlog component to be removed, write the following content into `binlog.yaml` and execute `kubectl apply -f binlog.yaml` to start a Pod that is mounted with the TLS file and the `binlogctl` tool. 
- -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: binlogctl -spec: - containers: - - name: binlogctl - image: pingcap/tidb-binlog:${tidb_version} - command: ['/bin/sh'] - stdin: true - stdinOnce: true - tty: true - volumeMounts: - - name: binlog-tls - mountPath: /etc/binlog-tls - volumes: - - name: binlog-tls - secret: - secretName: ${cluster_name}-cluster-client-secret -``` - -### Scale in Pump nodes - -1. Scale in Pump Pods: - - {{< copyable "shell-regular" >}} - - ```bash - kubectl patch tc ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"pump":{"replicas": ${pump_replicas}}}}' - ``` - - In the command above, `${pump_replicas}` is the desired number of Pump Pods after the scaling. - - > **Note:** - > - > Do not scale in Pump nodes to 0. Otherwise, [Pump nodes are removed completely](#remove-pump-nodes-completely). - -2. Wait for the Pump Pods to automatically be taken offline and deleted. Run the following command to observe the Pod status: - - {{< copyable "shell-regular" >}} - - ```bash - watch kubectl get po ${cluster_name} -n ${namespace} - ``` - -3. (Optional) Force Pump to go offline: - - If the offline operation fails, that is, the Pump Pods are not deleted for a long time, you can forcibly mark Pump as `offline`. - - - If TLS is not enabled for Pump, mark Pump as `offline`: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl run update-pump-${ordinal_id} --image=pingcap/tidb-binlog:${tidb_version} --namespace=${namespace} --restart=OnFailure -- /binlogctl -pd-urls=http://${cluster_name}-pd:2379 -cmd update-pump -node-id ${cluster_name}-pump-${ordinal_id}:8250 --state offline - ``` - - - If TLS is enabled for Pump, mark Pump as `offline` using the previously started Pod: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl exec binlogctl -n ${namespace} -- /binlogctl -pd-urls=https://${cluster_name}-pd:2379 -cmd update-pump -node-id ${cluster_name}-pump-${ordinal_id}:8250 --state offline -ssl-ca "/etc/binlog-tls/ca.crt" -ssl-cert "/etc/binlog-tls/tls.crt" -ssl-key "/etc/binlog-tls/tls.key" - ``` - -### Remove Pump nodes completely - -> **Note:** -> -> - Before performing the following steps, you need to have at least one Pump node in the cluster. If you have scaled in Pump nodes to `0`, you need to scale out Pump at least to `1` node before you perform the removing operation in this section. -> - To scale out the Pump to `1`, execute `kubectl patch tc ${tidb-cluster} -n ${namespace} --type merge -p '{"spec":{"pump":{"replicas": 1}}}'`. - -1. Before removing Pump nodes, execute `kubectl patch tc ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"tidb":{"binlogEnabled": false}}}'`. After the TiDB Pods are rolling updated, you can remove the Pump nodes. - - If you directly remove Pump nodes, it might cause TiDB failure because TiDB has no Pump nodes to write into. - -2. Refer to [Scale in Pump](#scale-in-pump-nodes) to scale in Pump to `0`. - -3. Execute `kubectl patch tc ${cluster_name} -n ${namespace} --type json -p '[{"op":"remove", "path":"/spec/pump"}]'` to delete all configuration items of `spec.pump`. - -4. Execute `kubectl delete sts ${cluster_name}-pump -n ${namespace}` to delete the StatefulSet resources of Pump. - -5. View PVCs used by the Pump cluster by executing `kubectl get pvc -n ${namespace} -l app.kubernetes.io/component=pump`. Then delete all the PVC resources of Pump by executing `kubectl delete pvc -l app.kubernetes.io/component=pump -n ${namespace}`. - -### Remove Drainer nodes - -1. 
Take Drainer nodes offline: - - In the following commands, `${drainer_node_id}` is the node ID of the Drainer node to be taken offline. If you have configured `drainerName` in `values.yaml` of Helm, the value of `${drainer_node_id}` is `${drainer_name}-0`; otherwise, the value of `${drainer_node_id}` is `${cluster_name}-${release_name}-drainer-0`. - - - If TLS is not enabled for Drainer, create a Pod to take Drainer offline: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl run offline-drainer-0 --image=pingcap/tidb-binlog:${tidb_version} --namespace=${namespace} --restart=OnFailure -- /binlogctl -pd-urls=http://${cluster_name}-pd:2379 -cmd offline-drainer -node-id ${drainer_node_id}:8249 - ``` - - - If TLS is enabled for Drainer, use the previously started Pod to take Drainer offline: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl exec binlogctl -n ${namespace} -- /binlogctl -pd-urls "https://${cluster_name}-pd:2379" -cmd offline-drainer -node-id ${drainer_node_id}:8249 -ssl-ca "/etc/binlog-tls/ca.crt" -ssl-cert "/etc/binlog-tls/tls.crt" -ssl-key "/etc/binlog-tls/tls.key" - ``` - - View the log of Drainer by executing the following command: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl logs -f -n ${namespace} ${drainer_node_id} - ``` - - If `drainer offline, please delete my pod` is output, this node is successfully taken offline. - -2. Delete the corresponding Drainer Pod: - - Execute `helm uninstall ${release_name} -n ${namespace}` to delete the Drainer Pod. - - If you no longer need Drainer, execute `kubectl delete pvc data-${drainer_node_id} -n ${namespace}` to delete the PVC resources of Drainer. - -3. (Optional) Force Drainer to go offline: - - If the offline operation fails, the Drainer Pod will not output `drainer offline, please delete my pod`. At this time, you can force Drainer to go offline, that is, taking Step 2 to delete the Drainer Pod and mark Drainer as `offline`. - - - If TLS is not enabled for Drainer, mark Drainer as `offline`: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl run update-drainer-${ordinal_id} --image=pingcap/tidb-binlog:${tidb_version} --namespace=${namespace} --restart=OnFailure -- /binlogctl -pd-urls=http://${cluster_name}-pd:2379 -cmd update-drainer -node-id ${drainer_node_id}:8249 --state offline - ``` - - - If TLS is enabled for Drainer, use the previously started Pod to take Drainer offline: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl exec binlogctl -n ${namespace} -- /binlogctl -pd-urls=https://${cluster_name}-pd:2379 -cmd update-drainer -node-id ${drainer_node_id}:8249 --state offline -ssl-ca "/etc/binlog-tls/ca.crt" -ssl-cert "/etc/binlog-tls/tls.crt" -ssl-key "/etc/binlog-tls/tls.key" - ``` diff --git a/en/deploy-tidb-cluster-across-multiple-kubernetes.md b/en/deploy-tidb-cluster-across-multiple-kubernetes.md index 2cb21a6b0..020e8340a 100644 --- a/en/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/en/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -517,9 +517,8 @@ For a TiDB cluster deployed across Kubernetes clusters, to perform a rolling upg 2. If TiProxy is deployed in clusters, upgrade the TiProxy versions for all the Kubernetes clusters that have TiProxy deployed. 3. If TiFlash is deployed in clusters, upgrade the TiFlash versions for all the Kubernetes clusters that have TiFlash deployed. 4. Upgrade TiKV versions for all Kubernetes clusters. - 5. If Pump is deployed in clusters, upgrade the Pump versions for all the Kubernetes clusters that have Pump deployed. 
- 6. Upgrade TiDB versions for all Kubernetes clusters. - 7. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed. + 5. Upgrade TiDB versions for all Kubernetes clusters. + 6. If TiCDC is deployed in clusters, upgrade the TiCDC versions for all the Kubernetes clusters that have TiCDC deployed. ## Exit and reclaim TidbCluster that already join a cross-Kubernetes cluster @@ -527,7 +526,7 @@ When you need to make a cluster exit from the joined TiDB cluster deployed acros - After scaling in the cluster, the number of TiKV replicas in the cluster should be greater than the number of `max-replicas` set in PD. By default, the number of TiKV replicas needs to be greater than three. -Take the second TidbCluster created in [the last section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you have enabled other components such as TiFlash, TiCDC, TiProxy, and Pump, set the number of these replicas to `0`: +Take the second TidbCluster created in [the last section](#step-2-deploy-the-new-tidbcluster-to-join-the-tidb-cluster) as an example. First, set the number of replicas of PD, TiKV, and TiDB to `0`. If you have enabled other components such as TiFlash, TiCDC and TiProxy, set the number of these replicas to `0`: > **Note:** > diff --git a/en/enable-tls-between-components.md b/en/enable-tls-between-components.md index be783373f..e4ab58d42 100644 --- a/en/enable-tls-between-components.md +++ b/en/enable-tls-between-components.md @@ -12,7 +12,7 @@ To enable TLS between TiDB components, perform the following steps: 1. Generate certificates for each component of the TiDB cluster to be created: - - A set of server-side certificates for the PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiProxy/TiKV Importer/TiDB Lightning component, saved as the Kubernetes Secret objects: `${cluster_name}-${component_name}-cluster-secret`. + - A set of server-side certificates for the PD/TiKV/TiDB/TiFlash/TiProxy/TiDB Lightning component, saved as the Kubernetes Secret objects: `${cluster_name}-${component_name}-cluster-secret`. - A set of shared client-side certificates for the various clients of each component, saved as the Kubernetes Secret objects: `${cluster_name}-cluster-client-secret`. > **Note:** @@ -281,117 +281,6 @@ This section describes how to issue certificates using two methods: `cfssl` and cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tidb-server.json | cfssljson -bare tidb-server ``` - - Pump - - First, create the default `pump-server.json` file: - - {{< copyable "shell-regular" >}} - - ```shell - cfssl print-defaults csr > pump-server.json - ``` - - Then, edit this file to change the `CN`, `hosts` attributes: - - ``` json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "*.${cluster_name}-pump", - "*.${cluster_name}-pump.${namespace}", - "*.${cluster_name}-pump.${namespace}.svc" - ], - ... - ``` - - `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`. 
- - Finally, generate the Pump server-side certificate: - - {{< copyable "shell-regular" >}} - - ```shell - cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal pump-server.json | cfssljson -bare pump-server - ``` - - - Drainer - - First, generate the default `drainer-server.json` file: - - {{< copyable "shell-regular" >}} - - ```shell - cfssl print-defaults csr > drainer-server.json - ``` - - Then, edit this file to change the `CN`, `hosts` attributes: - - ```json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "" - ], - ... - ``` - - Drainer is deployed using Helm. The `hosts` field varies with different configuration of the `values.yaml` file. - - If you have set the `drainerName` attribute when deploying Drainer as follows: - - ```yaml - ... - # Changes the names of the statefulset and Pod. - # The default value is clusterName-ReleaseName-drainer. - # Does not change the name of an existing running Drainer, which is unsupported. - drainerName: my-drainer - ... - ``` - - Then you can set the `hosts` attribute as described below: - - ```json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "*.${drainer_name}", - "*.${drainer_name}.${namespace}", - "*.${drainer_name}.${namespace}.svc" - ], - ... - ``` - - If you have not set the `drainerName` attribute when deploying Drainer, configure the `hosts` attribute as follows: - - ```json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "*.${cluster_name}-${release_name}-drainer", - "*.${cluster_name}-${release_name}-drainer.${namespace}", - "*.${cluster_name}-${release_name}-drainer.${namespace}.svc" - ], - ... - ``` - - `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. `${release_name}` is the `release name` you set when `helm install` is executed. `${drainer_name}` is `drainerName` in the `values.yaml` file. You can also add your customized `hosts`. - - Finally, generate the Drainer server-side certificate: - - {{< copyable "shell-regular" >}} - - ```shell - cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal drainer-server.json | cfssljson -bare drainer-server - ``` - - TiCDC 1. Generate the default `ticdc-server.json` file: @@ -511,47 +400,6 @@ This section describes how to issue certificates using two methods: `cfssl` and cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tiflash-server.json | cfssljson -bare tiflash-server ``` - - TiKV Importer - - If you need to [restore data using TiDB Lightning](restore-data-using-tidb-lightning.md), you need to generate a server-side certificate for the TiKV Importer component. - - 1. Generate the default `importer-server.json` file: - - {{< copyable "shell-regular" >}} - - ```shell - cfssl print-defaults csr > importer-server.json - ``` - - 2. Edit this file to change the `CN` and `hosts` attributes: - - ```json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "${cluster_name}-importer", - "${cluster_name}-importer.${namespace}", - "${cluster_name}-importer.${namespace}.svc" - "${cluster_name}-importer.${namespace}.svc", - "*.${cluster_name}-importer", - "*.${cluster_name}-importer.${namespace}", - "*.${cluster_name}-importer.${namespace}.svc" - ], - ... - ``` - - `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. You can also add your customized `hosts`. - - 3. 
Generate the TiKV Importer server-side certificate: - - {{< copyable "shell-regular" >}} - - ``` shell - cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal importer-server.json | cfssljson -bare importer-server - ``` - - TiDB Lightning If you need to [restore data using TiDB Lightning](restore-data-using-tidb-lightning.md), you need to generate a server-side certificate for the TiDB Lightning component. @@ -642,20 +490,6 @@ This section describes how to issue certificates using two methods: `cfssl` and kubectl create secret generic ${cluster_name}-tidb-cluster-secret --namespace=${namespace} --from-file=tls.crt=tidb-server.pem --from-file=tls.key=tidb-server-key.pem --from-file=ca.crt=ca.pem ``` - - The Pump cluster certificate Secret: - - {{< copyable "shell-regular" >}} - - ```shell - kubectl create secret generic ${cluster_name}-pump-cluster-secret --namespace=${namespace} --from-file=tls.crt=pump-server.pem --from-file=tls.key=pump-server-key.pem --from-file=ca.crt=ca.pem - ``` - - - The Drainer cluster certificate Secret: - - ```shell - kubectl create secret generic ${cluster_name}-drainer-cluster-secret --namespace=${namespace} --from-file=tls.crt=drainer-server.pem --from-file=tls.key=drainer-server-key.pem --from-file=ca.crt=ca.pem - ``` - - The TiCDC cluster certificate Secret: ```shell @@ -674,12 +508,6 @@ This section describes how to issue certificates using two methods: `cfssl` and kubectl create secret generic ${cluster_name}-tiflash-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiflash-server.pem --from-file=tls.key=tiflash-server-key.pem --from-file=ca.crt=ca.pem ``` - - The TiKV Importer cluster certificate Secret: - - ``` shell - kubectl create secret generic ${cluster_name}-importer-cluster-secret --namespace=${namespace} --from-file=tls.crt=importer-server.pem --from-file=tls.key=importer-server-key.pem --from-file=ca.crt=ca.pem - ``` - - The TiDB Lightning cluster certificate Secret: {{< copyable "shell-regular" >}} @@ -698,7 +526,7 @@ This section describes how to issue certificates using two methods: `cfssl` and You have created two Secret objects: - - One Secret object for each PD/TiKV/TiDB/Pump/Drainer server-side certificate to load when the server is started; + - One Secret object for each PD/TiKV/TiDB server-side certificate to load when the server is started; - One Secret object for their clients to connect. ### Using `cert-manager` @@ -960,147 +788,7 @@ This section describes how to issue certificates using two methods: `cfssl` and After the object is created, `cert-manager` generates a `${cluster_name}-tidb-cluster-secret` Secret object to be used by the TiDB component of the TiDB server. - - Pump - - ``` yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-pump-cluster-secret - namespace: ${namespace} - spec: - secretName: ${cluster_name}-pump-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "*.${cluster_name}-pump" - - "*.${cluster_name}-pump.${namespace}" - - "*.${cluster_name}-pump.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - `${cluster_name}` is the name of the cluster. 
Configure the items as follows: - - - Set `spec.secretName` to `${cluster_name}-pump-cluster-secret` - - Add `server auth` and `client auth` in `usages` - - Add the following DNSs in `dnsNames`. You can also add other DNSs according to your needs: - - - `*.${cluster_name}-pump` - - `*.${cluster_name}-pump.${namespace}` - - `*.${cluster_name}-pump.${namespace}.svc` - - - Add the following 2 IPs in `ipAddresses`. You can also add other IPs according to your needs: - - `127.0.0.1` - - `::1` - - Add the Issuer created above in the `issuerRef` - - For other attributes, refer to [cert-manager API](https://cert-manager.io/docs/reference/api-docs/#cert-manager.io/v1.CertificateSpec). - - After the object is created, `cert-manager` generates a `${cluster_name}-pump-cluster-secret` Secret object to be used by the Pump component of the TiDB server. - - - Drainer - - Drainer is deployed using Helm. The `dnsNames` field varies with different configuration of the `values.yaml` file. - - If you set the `drainerName` attributes when deploying Drainer as follows: - - ```yaml - ... - # Changes the name of the statefulset and Pod. - # The default value is clusterName-ReleaseName-drainer - # Does not change the name of an existing running Drainer, which is unsupported. - drainerName: my-drainer - ... - ``` - - Then you need to configure the certificate as described below: - - ``` yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-drainer-cluster-secret - namespace: ${namespace} - spec: - secretName: ${cluster_name}-drainer-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "*.${drainer_name}" - - "*.${drainer_name}.${namespace}" - - "*.${drainer_name}.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - If you didn't set the `drainerName` attribute when deploying Drainer, configure the `dnsNames` attributes as follows: - - ``` yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-drainer-cluster-secret - namespace: ${namespace} - spec: - secretName: ${cluster_name}-drainer-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "*.${cluster_name}-${release_name}-drainer" - - "*.${cluster_name}-${release_name}-drainer.${namespace}" - - "*.${cluster_name}-${release_name}-drainer.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. `${release_name}` is the `release name` you set when `helm install` is executed. `${drainer_name}` is `drainerName` in the `values.yaml` file. You can also add your customized `dnsNames`. - - - Set `spec.secretName` to `${cluster_name}-drainer-cluster-secret`. - - Add `server auth` and `client auth` in `usages`. - - See the above descriptions for `dnsNames`. - - Add the following 2 IPs in `ipAddresses`. You can also add other IPs according to your needs: - - `127.0.0.1` - - `::1` - - Add the Issuer created above in `issuerRef`. 
- - For other attributes, refer to [cert-manager API](https://cert-manager.io/docs/reference/api-docs/#cert-manager.io/v1.CertificateSpec). - - After the object is created, `cert-manager` generates a `${cluster_name}-drainer-cluster-secret` Secret object to be used by the Drainer component of the TiDB server. + `${cluster_name}` is the name of the cluster. `${namespace}` is the namespace in which the TiDB cluster is deployed. `${release_name}` is the `release name` you set when `helm install` is executed. You can also add your customized `dnsNames`. - TiCDC @@ -1228,61 +916,6 @@ This section describes how to issue certificates using two methods: `cfssl` and After the object is created, `cert-manager` generates a `${cluster_name}-tiflash-cluster-secret` Secret object to be used by the TiFlash component of the TiDB server. - - TiKV Importer - - If you need to [restore data using TiDB Lightning](restore-data-using-tidb-lightning.md), you need to generate a server-side certificate for the TiKV Importer component. - - ```yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-importer-cluster-secret - namespace: ${namespace} - spec: - secretName: ${cluster_name}-importer-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "${cluster_name}-importer" - - "${cluster_name}-importer.${namespace}" - - "${cluster_name}-importer.${namespace}.svc" - - "*.${cluster_name}-importer" - - "*.${cluster_name}-importer.${namespace}" - - "*.${cluster_name}-importer.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - In the file, `${cluster_name}` is the name of the cluster: - - - Set `spec.secretName` to `${cluster_name}-importer-cluster-secret`. - - Add `server auth` and `client auth` in `usages`. - - Add the following DNSs in `dnsNames`. You can also add other DNSs according to your needs: - - - `${cluster_name}-importer` - - `${cluster_name}-importer.${namespace}` - - `${cluster_name}-importer.${namespace}.svc` - - - Add the following 2 IP addresses in `ipAddresses`. You can also add other IP addresses according to your needs: - - `127.0.0.1` - - `::1` - - Add the Issuer created above in `issuerRef`. - - For other attributes, refer to [cert-manager API](https://cert-manager.io/docs/reference/api-docs/#cert-manager.io/v1.CertificateSpec). - - After the object is created, `cert-manager` generates a `${cluster_name}-importer-cluster-secret` Secret object to be used by the TiKV Importer component of the TiDB server. - - TiDB Lightning If you need to [restore data using TiDB Lightning](restore-data-using-tidb-lightning.md), you need to generate a server-side certificate for the TiDB Lightning component. @@ -1382,9 +1015,8 @@ In this step, you need to perform the following operations: - Create a TiDB cluster - Enable TLS between the TiDB components, and enable CN verification - Deploy a monitoring system -- Deploy the Pump component, and enable CN verification -1. Create a TiDB cluster with a monitoring system and the Pump component: +1. 
Create a TiDB cluster with a monitoring system: Create the `tidb-cluster.yaml` file: @@ -1432,15 +1064,6 @@ In this step, you need to perform the following operations: security: cluster-verify-cn: - TiDB - pump: - baseImage: pingcap/tidb-binlog - replicas: 1 - requests: - storage: "100Gi" - config: - security: - cert-allowed-cn: - - TiDB --- apiVersion: pingcap.com/v1alpha1 kind: TidbMonitor @@ -1499,52 +1122,7 @@ In this step, you need to perform the following operations: > - TiDB > ``` -2. Create a Drainer component and enable TLS and CN verification: - - - Method 1: Set `drainerName` when you create Drainer. - - Edit the `values.yaml` file, set `drainer-name`, and enable the TLS feature: - - ``` yaml - ... - drainerName: ${drainer_name} - tlsCluster: - enabled: true - certAllowedCN: - - TiDB - ... - ``` - - Deploy the Drainer cluster: - - {{< copyable "shell-regular" >}} - - ``` shell - helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml - ``` - - - Method 2: Do not set `drainerName` when you create Drainer. - - Edit the `values.yaml` file, and enable the TLS feature: - - ``` yaml - ... - tlsCluster: - enabled: true - certAllowedCN: - - TiDB - ... - ``` - - Deploy the Drainer cluster: - - {{< copyable "shell-regular" >}} - - ``` shell - helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml - ``` - -3. Create the Backup/Restore resource object: +2. Create the Backup/Restore resource object: - Create the `backup.yaml` file: diff --git a/en/modify-tidb-configuration.md b/en/modify-tidb-configuration.md index d9c3da80f..0127610b7 100644 --- a/en/modify-tidb-configuration.md +++ b/en/modify-tidb-configuration.md @@ -13,7 +13,7 @@ This document describes how to modify the configuration of TiDB clusters deploye For TiDB and TiKV, if you [modify their configuration online](https://docs.pingcap.com/tidb/stable/dynamic-config/) using SQL statements, after you upgrade or restart the cluster, the configurations will be overwritten by those in the `TidbCluster` CR. This leads to the online configuration update being invalid. Therefore, to persist the configuration, you must directly modify their configurations in the `TidbCluster` CR. -For TiFlash, TiProxy, TiCDC, and Pump, you can only modify their configurations in the `TidbCluster` CR. +For TiFlash, TiProxy and TiCDC, you can only modify their configurations in the `TidbCluster` CR. To modify the configuration in the `TidbCluster` CR, take the following steps: diff --git a/en/releases/release-1.0.4.md b/en/releases/release-1.0.4.md index 97b6b9581..ceb9d69ee 100644 --- a/en/releases/release-1.0.4.md +++ b/en/releases/release-1.0.4.md @@ -31,7 +31,7 @@ There is no action required if you are upgrading from [v1.0.3](release-1.0.3.md) New Helm chart `tidb-lightning` brings [TiDB Lightning](https://docs.pingcap.com/tidb/stable/tidb-lightning-overview) support for TiDB on Kubernetes. Check out the [document](../restore-data-using-tidb-lightning.md) for detailed user guide. -Another new Helm chart `tidb-drainer` brings multiple drainers support for TiDB Binlog on Kubernetes. Check out the [document](../deploy-tidb-binlog.md) for detailed user guide. +Another new Helm chart `tidb-drainer` brings multiple drainers support for TiDB Binlog on Kubernetes. 
### Improvements diff --git a/en/restore-from-aws-s3-by-snapshot.md b/en/restore-from-aws-s3-by-snapshot.md index fb7901719..e9cc8bf83 100644 --- a/en/restore-from-aws-s3-by-snapshot.md +++ b/en/restore-from-aws-s3-by-snapshot.md @@ -17,7 +17,7 @@ The restore method described in this document is implemented based on CustomReso - Snapshot restore is applicable to TiDB Operator v1.4.0 or above, and TiDB v6.3.0 or above. - Snapshot restore only supports restoring to a cluster with the same number of TiKV nodes and volumes configuration. That is, the number of TiKV nodes and volume configurations is identical between the restore cluster and backup cluster. -- Snapshot restore is currently not supported for TiFlash, TiCDC, DM, and TiDB Binlog nodes. +- Snapshot restore is currently not supported for TiFlash, TiCDC and DM nodes. - Snapshot restore supports only the default configuration (3000IOPS/125 MB) of GP3. To perform restore using other configurations, you can specify the volume type or configuration, such as `--volume-type=io2`, `--volume-iops=7000`, or `--volume-throughput=400`. ```yaml diff --git a/en/restore-from-ebs-snapshot-across-multiple-kubernetes.md b/en/restore-from-ebs-snapshot-across-multiple-kubernetes.md index 493813029..6242accf6 100644 --- a/en/restore-from-ebs-snapshot-across-multiple-kubernetes.md +++ b/en/restore-from-ebs-snapshot-across-multiple-kubernetes.md @@ -17,7 +17,7 @@ The restore method described in this document is implemented based on CustomReso - Snapshot restore is applicable to TiDB Operator v1.5.1 or later versions and TiDB v6.5.4 or later versions. - You can use snapshot restore only to restore data to a cluster with the same number of TiKV nodes and volumes configuration. That is, the number of TiKV nodes and volume configurations of TiKV nodes are identical between the restore cluster and backup cluster. -- Snapshot restore is currently not supported for TiFlash, TiCDC, DM, and TiDB Binlog nodes. +- Snapshot restore is currently not supported for TiFlash, TiCDC and DM nodes. ## Prerequisites diff --git a/en/restore-from-gcs.md b/en/restore-from-gcs.md index c74862fd6..4b521e60f 100644 --- a/en/restore-from-gcs.md +++ b/en/restore-from-gcs.md @@ -10,7 +10,7 @@ This document describes how to restore the TiDB cluster data backed up using TiD The restore method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator v1.1 or later versions. For the underlying implementation, [TiDB Lightning TiDB-backend](https://docs.pingcap.com/tidb/stable/tidb-lightning-backends#tidb-lightning-tidb-backend) is used to perform the restore. -TiDB Lightning is a tool used for fast full import of large amounts of data into a TiDB cluster. It reads data from local disks, Google Cloud Storage (GCS) or Amazon S3. TiDB Lightning supports three backends: `Importer-backend`, `Local-backend`, and `TiDB-backend`. In this document, `TiDB-backend` is used. For the differences of these backends and how to choose backends, see [TiDB Lightning Backends](https://docs.pingcap.com/tidb/stable/tidb-lightning-backends). To import data using `Importer-backend` or `Local-backend`, see [Import Data](restore-data-using-tidb-lightning.md). +TiDB Lightning is a tool used for fast full import of large amounts of data into a TiDB cluster. It reads data from local disks, Google Cloud Storage (GCS) or Amazon S3. TiDB Lightning supports two backends: `Local-backend` and `TiDB-backend`. In this document, `TiDB-backend` is used. 
For the differences of these backends and how to choose backends, see [TiDB Lightning Backends](https://docs.pingcap.com/tidb/stable/tidb-lightning-backends). To import data using `Local-backend`, see [Import Data](restore-data-using-tidb-lightning.md). This document shows an example in which the backup data stored in the specified path on [GCS](https://cloud.google.com/storage/docs/) is restored to the TiDB cluster. diff --git a/en/restore-from-s3.md b/en/restore-from-s3.md index 4321da1d3..de5b483fe 100644 --- a/en/restore-from-s3.md +++ b/en/restore-from-s3.md @@ -10,7 +10,7 @@ This document describes how to restore the TiDB cluster data backed up using TiD The restore method described in this document is implemented based on CustomResourceDefinition (CRD) in TiDB Operator v1.1 or later versions. For the underlying implementation, [TiDB Lightning TiDB-backend](https://docs.pingcap.com/tidb/stable/tidb-lightning-backends#tidb-lightning-tidb-backend) is used to perform the restore. -TiDB Lightning is a tool used for fast full import of large amounts of data into a TiDB cluster. It reads data from local disks, Google Cloud Storage (GCS) or Amazon S3. TiDB Lightning supports three backends: `Importer-backend`, `Local-backend`, and `TiDB-backend`. In this document, `TiDB-backend` is used. For the differences of these backends and how to choose backends, see [TiDB Lightning Backends](https://docs.pingcap.com/tidb/stable/tidb-lightning-backends). To import data using `Importer-backend` or `Local-backend`, see [Import Data](restore-data-using-tidb-lightning.md). +TiDB Lightning is a tool used for fast full import of large amounts of data into a TiDB cluster. It reads data from local disks, Google Cloud Storage (GCS) or Amazon S3. TiDB Lightning supports two backends: `Local-backend` and `TiDB-backend`. In this document, `TiDB-backend` is used. For the differences of these backends and how to choose backends, see [TiDB Lightning Backends](https://docs.pingcap.com/tidb/stable/tidb-lightning-backends). To import data using `Local-backend`, see [Import Data](restore-data-using-tidb-lightning.md). This document shows an example in which the backup data stored in the specified path on the S3-compatible storage is restored to the TiDB cluster. diff --git a/en/suspend-tidb-cluster.md b/en/suspend-tidb-cluster.md index 1ec4bc458..73a30c975 100644 --- a/en/suspend-tidb-cluster.md +++ b/en/suspend-tidb-cluster.md @@ -58,7 +58,6 @@ If you need to suspend the TiDB cluster, take the following steps: * TiFlash * TiCDC * TiKV - * Pump * TiProxy * PD diff --git a/en/tidb-toolkit.md b/en/tidb-toolkit.md index b4c0ba160..2c32a6a76 100644 --- a/en/tidb-toolkit.md +++ b/en/tidb-toolkit.md @@ -177,10 +177,7 @@ version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9b Kubernetes applications are packed as charts in Helm. PingCAP provides the following Helm charts for TiDB on Kubernetes: * `tidb-operator`: used to deploy TiDB Operator; -* `tidb-cluster`: used to deploy TiDB clusters; -* `tidb-backup`: used to back up or restore TiDB clusters; * `tidb-lightning`: used to import data into a TiDB cluster; -* `tidb-drainer`: used to deploy TiDB Drainer; These charts are hosted in the Helm chart repository `https://charts.pingcap.org/` maintained by PingCAP. 
You can add this repository to your local server or computer using the following command: @@ -200,9 +197,6 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.6.1 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.6.1 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.6.1 A Helm chart for TiDB Binlog drainer. pingcap/tidb-lightning v1.6.1 A Helm chart for TiDB Lightning pingcap/tidb-operator v1.6.1 v1.6.1 tidb-operator Helm chart for Kubernetes ``` @@ -267,7 +261,6 @@ Use the following command to download the chart file required for cluster instal ```shell wget http://charts.pingcap.org/tidb-operator-v1.6.1.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.6.1.tgz wget http://charts.pingcap.org/tidb-lightning-v1.6.1.tgz ``` diff --git a/en/upgrade-a-tidb-cluster.md b/en/upgrade-a-tidb-cluster.md index 737fd729e..2956cdf5c 100644 --- a/en/upgrade-a-tidb-cluster.md +++ b/en/upgrade-a-tidb-cluster.md @@ -54,12 +54,12 @@ During the rolling update, TiDB Operator automatically completes Leader transfer kubectl edit tc ${cluster_name} -n ${namespace} ``` - Usually, all components in a cluster are in the same version. You can upgrade the TiDB cluster simply by modifying `spec.version`. If you need to use different versions for different components, modify `spec..version`. + Usually, all components in a cluster are in the same version. You can upgrade the TiDB cluster simply by modifying `spec.version`. If you need to use different versions for different components, modify `spec..version`. The `version` field has following formats: - `spec.version`: the format is `imageTag`, such as `v8.5.0` - - `spec..version`: the format is `imageTag`, such as `v3.1.0` + - `spec..version`: the format is `imageTag`, such as `v3.1.0` 2. Check the upgrade progress: diff --git a/en/use-auto-failover.md b/en/use-auto-failover.md index b82e4f43c..78636e23c 100644 --- a/en/use-auto-failover.md +++ b/en/use-auto-failover.md @@ -39,7 +39,7 @@ In addition, when configuring a TiDB cluster, you can specify `spec.${component} ## Automatic failover policies -There are six components in a TiDB cluster: PD, TiKV, TiDB, TiFlash, TiCDC, and Pump. Currently, TiCDC and Pump do not support the automatic failover feature. PD, TiKV, TiDB, and TiFlash have different failover policies. This section gives a detailed introduction to these policies. +There are five components in a TiDB cluster: PD, TiKV, TiDB, TiFlash, and TiCDC. Currently, TiCDC does not support the automatic failover feature. PD, TiKV, TiDB, and TiFlash have different failover policies. This section gives a detailed introduction to these policies.
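As a reference point for the policies below, failover can be capped or disabled per component in the TidbCluster spec. A minimal sketch, assuming the `maxFailoverCount` field exposed by the TidbCluster CRD (values are illustrative):

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  tidb:
    maxFailoverCount: 3   # at most 3 extra Pods are created by failover
  tikv:
    maxFailoverCount: 3
  tiflash:
    maxFailoverCount: 0   # 0 disables automatic failover for this component
```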
### Failover with PD diff --git a/zh/TOC.md b/zh/TOC.md index 28693aa3b..053133dbe 100644 --- a/zh/TOC.md +++ b/zh/TOC.md @@ -28,7 +28,6 @@ - [跨多个 Kubernetes 集群部署 TiDB 集群](deploy-tidb-cluster-across-multiple-kubernetes.md) - [部署 TiDB 异构集群](deploy-heterogeneous-tidb-cluster.md) - [部署增量数据同步工具 TiCDC](deploy-ticdc.md) - - [部署 Binlog 收集工具](deploy-tidb-binlog.md) - 监控与告警 - [部署 TiDB 集群监控与告警](monitor-a-tidb-cluster.md) - [使用 TiDB Dashboard 监控诊断 TiDB 集群](access-dashboard.md) @@ -118,8 +117,6 @@ - [TiDB Operator RBAC 规则](tidb-operator-rbac.md) - 工具 - [TiDB Toolkit](tidb-toolkit.md) - - 配置 - - [tidb-drainer chart 配置](configure-tidb-binlog-drainer.md) - [日志收集](logs-collection.md) - [Kubernetes 监控与告警](monitor-kubernetes.md) - [PingCAP Clinic 数据采集范围说明](clinic-data-collection.md) diff --git a/zh/backup-to-aws-s3-by-snapshot.md b/zh/backup-to-aws-s3-by-snapshot.md index ae5892369..14f8ac4c1 100644 --- a/zh/backup-to-aws-s3-by-snapshot.md +++ b/zh/backup-to-aws-s3-by-snapshot.md @@ -26,7 +26,7 @@ summary: 介绍如何基于 EBS 卷快照使用 TiDB Operator 备份 TiDB 集群 - TiKV 配置中,**不能**将 [`resolved-ts.enable`](https://docs.pingcap.com/zh/tidb/stable/tikv-configuration-file#enable-2) 设置为 `false`,也**不能**将 [`raftstore.report-min-resolved-ts-interval`](https://docs.pingcap.com/zh/tidb/stable/tikv-configuration-file#report-min-resolved-ts-interval-从-v600-版本开始引入) 设置为 `"0s"`,否则会导致备份失败。 - PD 配置中,**不能**将 [`pd-server.min-resolved-ts-persistence-interval`](https://docs.pingcap.com/zh/tidb/stable/pd-configuration-file#min-resolved-ts-persistence-interval-从-v600-版本开始引入) 设置为 `"0s"`,否则会导致备份失败。 - TiDB 集群部署在 EKS 上,且使用了 AWS EBS 卷。 -- 暂不支持 TiFlash、TiCDC、DM 和 TiDB Binlog 相关节点的卷快照备份。 +- 暂不支持 TiFlash、TiCDC 和 DM 相关节点的卷快照备份。 > **注意:** > diff --git a/zh/configure-a-tidb-cluster.md b/zh/configure-a-tidb-cluster.md index 1f907df26..1b9b98f83 100644 --- a/zh/configure-a-tidb-cluster.md +++ b/zh/configure-a-tidb-cluster.md @@ -37,13 +37,13 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/configure-a-tidb-cluster/','/zh/tidb- ### 版本 -正常情况下,集群内的各组件应该使用相同版本,所以一般建议配置 `spec..baseImage` + `spec.version` 即可。如果需要为不同的组件配置不同的版本,则可以配置 `spec..version`。 +正常情况下,集群内的各组件应该使用相同版本,所以一般建议配置 `spec..baseImage` + `spec.version` 即可。如果需要为不同的组件配置不同的版本,则可以配置 `spec..version`。 相关参数的格式如下: - `spec.version`,格式为 `imageTag`,例如 `v8.5.0` -- `spec..baseImage`,格式为 `imageName`,例如 `pingcap/tidb` -- `spec..version`,格式为 `imageTag`,例如 `v8.5.0` +- `spec..baseImage`,格式为 `imageName`,例如 `pingcap/tidb` +- `spec..version`,格式为 `imageTag`,例如 `v8.5.0` ### 推荐配置 @@ -247,7 +247,7 @@ TiDB Operator 支持为 PD、TiDB、TiKV、TiCDC 挂载多块 PV,可以用于 ### HostNetwork -PD、TiKV、TiDB、TiFlash、TiProxy、TiCDC 及 Pump 支持配置 Pod 使用宿主机上的网络命名空间 [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy)。可通过配置 `spec.hostNetwork: true` 为所有受支持的组件开启,或通过为特定组件配置 `hostNetwork: true` 为单个或多个组件开启。 +PD、TiKV、TiDB、TiFlash、TiProxy 及 TiCDC 支持配置 Pod 使用宿主机上的网络命名空间 [`HostNetwork`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy)。可通过配置 `spec.hostNetwork: true` 为所有受支持的组件开启,或通过为特定组件配置 `hostNetwork: true` 为单个或多个组件开启。 ### Discovery diff --git a/zh/configure-storage-class.md b/zh/configure-storage-class.md index fe95bf879..921b764d3 100644 --- a/zh/configure-storage-class.md +++ b/zh/configure-storage-class.md @@ -6,7 +6,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/configure-storage-class/','/docs-cn/d # Kubernetes 上的持久化存储类型配置 -TiDB 集群中 PD、TiKV、监控等组件以及 TiDB Binlog 和备份等工具都需要使用将数据持久化的存储。Kubernetes 上的数据持久化需要使用 [PersistentVolume 
(PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)。Kubernetes 提供多种[存储类型](https://kubernetes.io/docs/concepts/storage/volumes/),主要分为两大类: +TiDB 集群中 PD、TiKV、监控等组件和备份等工具都需要使用将数据持久化的存储。Kubernetes 上的数据持久化需要使用 [PersistentVolume (PV)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/)。Kubernetes 提供多种[存储类型](https://kubernetes.io/docs/concepts/storage/volumes/),主要分为两大类: * 网络存储 @@ -28,7 +28,7 @@ TiKV 自身借助 Raft 实现了数据复制,出现节点故障后,PD 会自 PD 同样借助 Raft 实现了数据复制,但作为存储集群元信息的数据库,并不是 IO 密集型应用,所以一般本地普通 SAS 盘或网络 SSD 存储(例如 AWS 上 gp2 类型的 EBS 存储卷,Google Cloud 上的持久化 SSD 盘)就可以满足要求。 -监控组件以及 TiDB Binlog、备份等工具,由于自身没有做多副本冗余,所以为保证可用性,推荐用网络存储。其中 TiDB Binlog 的 pump 和 drainer 组件属于 IO 密集型应用,需要较低的读写延迟,所以推荐用高性能的网络存储(例如 AWS 上的 io1 类型的 EBS 存储卷,Google Cloud 上的持久化 SSD 盘)。 +监控组件以及备份等工具,由于自身没有做多副本冗余,所以为保证可用性,推荐用网络存储。 在利用 TiDB Operator 部署 TiDB 集群或者备份工具的时候,需要持久化存储的组件都可以通过 values.yaml 配置文件中对应的 `storageClassName` 设置存储类型。不设置时默认都使用 `local-storage`。 @@ -86,12 +86,6 @@ Kubernetes 当前支持静态分配的本地存储。可使用 [local-static-pro > > 该步骤中创建的目录个数取决于规划的 TiDB 集群数量。1 个目录会对应创建 1 个 PV。每个 TiDB 集群的监控数据会使用 1 个 PV。 -- 给 TiDB Binlog 和备份数据使用的盘,可以参考[步骤](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md#sharing-a-disk-filesystem-by-multiple-filesystem-pvs)挂载盘,创建目录,并将新建的目录以 bind mount 方式挂载到 `/mnt/backup` 目录下。 - - > **注意:** - > - > 该步骤中创建的目录个数取决于规划的 TiDB 集群数量、每个集群内的 Pump 数量及备份方式。1 个目录会对应创建 1 个 PV。每个 Pump 会使用 1 个 PV,每个 drainer 会使用 1 个 PV,所有 [Ad-hoc 全量备份](backup-to-s3.md#ad-hoc-全量备份)和所有[定时全量备份](backup-to-s3.md#定时全量备份)会共用 1 个 PV。 - 上述的 `/mnt/ssd`、`/mnt/sharedssd`、`/mnt/monitoring` 和 `/mnt/backup` 是 local-volume-provisioner 使用的发现目录(discovery directory),local-volume-provisioner 会为发现目录下的每一个子目录创建对应的 PV。 ### 第 2 步:部署 local-volume-provisioner @@ -128,7 +122,7 @@ Kubernetes 当前支持静态分配的本地存储。可使用 [local-static-pro monitoring-storage: # 给监控数据使用 hostDir: /mnt/monitoring mountDir: /mnt/monitoring - backup-storage: # 给 TiDB Binlog 和备份数据使用 + backup-storage: # 给备份数据使用 hostDir: /mnt/backup mountDir: /mnt/backup ``` diff --git a/zh/configure-tidb-binlog-drainer.md b/zh/configure-tidb-binlog-drainer.md deleted file mode 100644 index bfc127132..000000000 --- a/zh/configure-tidb-binlog-drainer.md +++ /dev/null @@ -1,53 +0,0 @@ ---- -title: Kubernetes 上的 TiDB Binlog Drainer 配置 -summary: 了解 Kubernetes 上的 TiDB Binlog Drainer 配置参数。 -aliases: ['/docs-cn/tidb-in-kubernetes/dev/configure-tidb-binlog-drainer/'] ---- - -# Kubernetes 上的 TiDB Binlog Drainer 配置 - -本文档介绍 Kubernetes 上 [TiDB Binlog](deploy-tidb-binlog.md) drainer 的配置参数。 - -> **警告:** -> -> 从 TiDB v7.5.0 开始,TiDB Binlog 的数据同步功能被废弃。从 v8.3.0 开始,TiDB Binlog 被完全废弃,并计划在未来版本中移除。如需进行增量数据同步,请使用 [TiCDC](deploy-ticdc.md)。如需按时间点恢复,请使用 Point-in-Time Recovery (PITR)。 - -## 配置参数 - -下表包含所有用于 `tidb-drainer` chart 的配置参数。关于如何配置这些参数,可参阅[使用 Helm](tidb-toolkit.md#使用-helm)。 - -| 参数 | 说明 | 默认值 | -| :----- | :---- | :----- | -| `timezone` | 时区配置 | `UTC` | -| `drainerName` | Statefulset 名称 | `""` | -| `clusterName` | 源 TiDB 集群的名称 | `demo` | -| `clusterVersion` | 源 TiDB 集群的版本 | `v3.0.1` | -| `baseImage` | TiDB Binlog 的基础镜像 | `pingcap/tidb-binlog` | -| `imagePullPolicy` | 镜像的拉取策略 | `IfNotPresent` | -| `logLevel` | drainer 进程的日志级别 | `info` | -| `storageClassName` | drainer 所使用的 `storageClass`。`storageClassName` 是 Kubernetes 集群提供的一种存储,可以映射到服务质量级别、备份策略或集群管理员确定的任何策略。详情可参阅 [storage-classes](https://kubernetes.io/docs/concepts/storage/storage-classes) | `local-storage` | -| `storage` | drainer Pod 的存储限制。请注意,如果 `db-type` 设为 `pd`,则应将本参数值设得大一些 | `10Gi` | -| `disableDetect` | 决定是否禁用事故检测 | `false` | -| 
`initialCommitTs` | 如果 drainer 没有断点,则用于初始化断点。该参数值为 string 类型,如 `"424364429251444742"` | `"-1"` | -| `tlsCluster.enabled` | 是否开启集群间 TLS | `false` | -| `config` | 传递到 drainer 的配置文件。详情可参阅 [drainer.toml](https://github.com/pingcap/tidb-binlog/blob/master/cmd/drainer/drainer.toml) |(见下文)| -| `resources` | drainer Pod 的资源限制和请求 | `{}` | -| `nodeSelector` | 确保 drainer Pod 仅被调度到具有特定键值对作为标签的节点上。详情可参阅 [nodeselector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) | `{}` | -| `tolerations` | 适用于 drainer Pod,允许将 Pod 调度到有指定 taint 的节点上。详情可参阅 [taint-and-toleration](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration) | `{}` | -| `affinity` | 定义 drainer Pod 的调度策略和首选项。详情可参阅 [affinity-and-anti-affinity](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity) | `{}` | - -`config` 的默认值为: - -```toml -detect-interval = 10 -compressor = "" -[syncer] -worker-count = 16 -disable-dispatch = false -ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql" -safe-mode = false -txn-batch = 20 -db-type = "file" -[syncer.to] -dir = "/data/pb" -``` diff --git a/zh/deploy-cluster-on-arm64.md b/zh/deploy-cluster-on-arm64.md index 893f24dbf..2c1806c25 100644 --- a/zh/deploy-cluster-on-arm64.md +++ b/zh/deploy-cluster-on-arm64.md @@ -52,9 +52,6 @@ summary: 本文档介绍如何在 ARM64 机器上部署 TiDB 集群 tikv: baseImage: pingcap/tikv-arm64 # ... - pump: - baseImage: pingcap/tidb-binlog-arm64 - # ... ticdc: baseImage: pingcap/ticdc-arm64 # ... diff --git a/zh/deploy-failures.md b/zh/deploy-failures.md index 0465e5e67..057f0b38c 100644 --- a/zh/deploy-failures.md +++ b/zh/deploy-failures.md @@ -38,7 +38,7 @@ kubectl describe restores -n ${namespace} ${restore_name} Pod 处于 Pending 状态,通常都是资源不满足导致的,比如: -* 使用持久化存储的 PD、TiKV、TiFlash、Pump、Monitor、Backup、Restore Pod 使用的 PVC 的 StorageClass 不存在或 PV 不足 +* 使用持久化存储的 PD、TiKV、TiFlash、Monitor、Backup、Restore Pod 使用的 PVC 的 StorageClass 不存在或 PV 不足 * Kubernetes 集群中没有节点能满足 Pod 申请的 CPU 或内存 * PD 或者 TiKV Replicas 数量和集群内节点数量不满足 tidb-scheduler 高可用调度策略 * TiDB、TiProxy 等组件使用的证书没有配置 diff --git a/zh/deploy-tidb-binlog.md b/zh/deploy-tidb-binlog.md deleted file mode 100644 index 6ae58db61..000000000 --- a/zh/deploy-tidb-binlog.md +++ /dev/null @@ -1,427 +0,0 @@ ---- -title: 部署 TiDB Binlog -summary: 了解如何在 Kubernetes 上部署 TiDB 集群的 TiDB Binlog。 -aliases: ['/docs-cn/tidb-in-kubernetes/dev/deploy-tidb-binlog/'] ---- - -# 部署 TiDB Binlog - -本文档介绍如何在 Kubernetes 上部署 TiDB 集群的 [TiDB Binlog](https://docs.pingcap.com/zh/tidb/stable/tidb-binlog-overview)。 - -> **警告:** -> -> 从 TiDB v7.5.0 开始,TiDB Binlog 的数据同步功能被废弃。从 v8.3.0 开始,TiDB Binlog 被完全废弃,并计划在未来版本中移除。如需进行增量数据同步,请使用 [TiCDC](deploy-ticdc.md)。如需按时间点恢复,请使用 Point-in-Time Recovery (PITR)。 - -## 部署准备 - -- [部署 TiDB Operator](deploy-tidb-operator.md); -- [安装 Helm](tidb-toolkit.md#使用-helm) 并配置 PingCAP 官方 chart 仓库。 - -## 部署 TiDB 集群的 TiDB Binlog - -默认情况下,TiDB Binlog 在 TiDB 集群中处于禁用状态。若要创建一个启用 TiDB Binlog 的 TiDB 集群,或在现有 TiDB 集群中启用 TiDB Binlog,可根据以下步骤进行操作。 - -### 部署 Pump - -可以修改 TidbCluster CR,添加 Pump 相关配置,示例如下: - -``` yaml -spec - ... - pump: - baseImage: pingcap/tidb-binlog - version: v8.1.0 - replicas: 1 - storageClassName: local-storage - requests: - storage: 30Gi - schedulerName: default-scheduler - config: - addr: 0.0.0.0:8250 - gc: 7 - heartbeat-interval: 2 -``` - -自 v1.1.6 版本起支持透传 TOML 配置给组件: - -```yaml -spec - ... 
- pump: - baseImage: pingcap/tidb-binlog - version: v8.1.0 - replicas: 1 - storageClassName: local-storage - requests: - storage: 30Gi - schedulerName: default-scheduler - config: | - addr = "0.0.0.0:8250" - gc = 7 - heartbeat-interval = 2 -``` - -按照集群实际情况修改 `version`、`replicas`、`storageClassName`、`requests.storage` 等配置。 - -如果在生产环境中开启 TiDB Binlog,建议为 TiDB 与 Pump 组件设置亲和性和反亲和性。如果在内网测试环境中尝试使用开启 TiDB Binlog,可以跳过此步。 - -默认情况下,TiDB 和 Pump 的 affinity 亲和性设置为 `{}`。由于目前 Pump 组件与 TiDB 组件默认并非一一对应,当启用 TiDB Binlog 时,如果 Pump 与 TiDB 组件分开部署并出现网络隔离,而且 TiDB 组件还开启了 `ignore-error`,则会导致 TiDB 丢失 Binlog。推荐通过亲和性特性将 TiDB 组件与 Pump 部署在同一台 Node 上,同时通过反亲和性特性将 Pump 分散在不同的 Node 上,每台 Node 上至多仅需一个 Pump 实例。 - -* 将 `spec.tidb.affinity` 按照如下设置: - - ```yaml - spec: - tidb: - affinity: - podAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: "app.kubernetes.io/component" - operator: In - values: - - "pump" - - key: "app.kubernetes.io/managed-by" - operator: In - values: - - "tidb-operator" - - key: "app.kubernetes.io/name" - operator: In - values: - - "tidb-cluster" - - key: "app.kubernetes.io/instance" - operator: In - values: - - ${cluster_name} - topologyKey: kubernetes.io/hostname - ``` - -* 将 `spec.pump.affinity` 按照如下设置: - - ```yaml - spec: - pump: - affinity: - podAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: "app.kubernetes.io/component" - operator: In - values: - - "tidb" - - key: "app.kubernetes.io/managed-by" - operator: In - values: - - "tidb-operator" - - key: "app.kubernetes.io/name" - operator: In - values: - - "tidb-cluster" - - key: "app.kubernetes.io/instance" - operator: In - values: - - ${cluster_name} - topologyKey: kubernetes.io/hostname - podAntiAffinity: - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 100 - podAffinityTerm: - labelSelector: - matchExpressions: - - key: "app.kubernetes.io/component" - operator: In - values: - - "pump" - - key: "app.kubernetes.io/managed-by" - operator: In - values: - - "tidb-operator" - - key: "app.kubernetes.io/name" - operator: In - values: - - "tidb-cluster" - - key: "app.kubernetes.io/instance" - operator: In - values: - - ${cluster_name} - topologyKey: kubernetes.io/hostname - ``` - -> **注意:** -> -> 如果更新了 TiDB 组件的亲和性配置,将引起 TiDB 组件滚动更新。 - -### 部署 Drainer - -可以通过 `tidb-drainer` Helm chart 来为 TiDB 集群部署多个 drainer,示例如下: - -1. 确保 PingCAP Helm 库是最新的: - - {{< copyable "shell-regular" >}} - - ```shell - helm repo update - ``` - - {{< copyable "shell-regular" >}} - - ```shell - helm search repo tidb-drainer -l - ``` - -2. 获取默认的 `values.yaml` 文件以方便自定义: - - {{< copyable "shell-regular" >}} - - ```shell - helm inspect values pingcap/tidb-drainer --version=${chart_version} > values.yaml - ``` - -3. 
修改 `values.yaml` 文件以指定源 TiDB 集群和 drainer 的下游数据库。示例如下: - - ```yaml - clusterName: example-tidb - clusterVersion: v8.1.0 - baseImage: pingcap/tidb-binlog - storageClassName: local-storage - storage: 10Gi - initialCommitTs: "-1" - config: | - detect-interval = 10 - [syncer] - worker-count = 16 - txn-batch = 20 - disable-dispatch = false - ignore-schemas = "INFORMATION_SCHEMA,PERFORMANCE_SCHEMA,mysql" - safe-mode = false - db-type = "tidb" - [syncer.to] - host = "downstream-tidb" - user = "root" - password = "" - port = 4000 - ``` - - `clusterName` 和 `clusterVersion` 必须匹配所需的源 TiDB 集群。 - - `initialCommitTs` 为 drainer 没有 checkpoint 时数据同步的起始 commit timestamp。该参数值必须以 string 类型配置,如 `"424364429251444742"`。 - - 有关完整的配置详细信息,请参阅 [Kubernetes 上的 TiDB Binlog Drainer 配置](configure-tidb-binlog-drainer.md)。 - -4. 部署 Drainer: - - {{< copyable "shell-regular" >}} - - ```shell - helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${chart_version} -f values.yaml - ``` - - 如果服务器没有外网,请参考 [部署 TiDB 集群](deploy-on-general-kubernetes.md#部署-tidb-集群) 在有外网的机器上将用到的 Docker 镜像下载下来并上传到服务器上。 - - > **注意:** - > - > 该 chart 必须与源 TiDB 集群安装在相同的命名空间中。 - -## 开启 TLS - -### 为 TiDB 组件间开启 TLS - -如果要为 TiDB 集群及 TiDB Binlog 开启 TLS,请参考[为 TiDB 组件间开启 TLS](enable-tls-between-components.md) 进行配置。 - -创建 secret 并启动包含 Pump 的 TiDB 集群后,修改 `values.yaml` 将 `tlsCluster.enabled` 设置为 true,并配置相应的 `certAllowedCN`: - -```yaml -... -tlsCluster: - enabled: true - # certAllowedCN: - # - TiDB -... -``` - -### 为 Drainer 和下游数据库间开启 TLS - -如果 `tidb-drainer` 的写入下游设置为 `mysql/tidb`,并且希望为 `drainer` 和下游数据库间开启 TLS,可以参考下面步骤进行配置。 - -首先我们需要创建一个包含下游数据库 TLS 信息的 secret,创建方式如下: - -```bash -kubectl create secret generic ${downstream_database_secret_name} --namespace=${namespace} --from-file=tls.crt=client.pem --from-file=tls.key=client-key.pem --from-file=ca.crt=ca.pem -``` - -默认情况下,`tidb-drainer` 会将 checkpoint 保存到下游数据库中,所以仅需配置 `tlsSyncer.tlsClientSecretName` 并配置相应的 `certAllowedCN` 即可。 - -```yaml -tlsSyncer: - tlsClientSecretName: ${downstream_database_secret_name} - # certAllowedCN: - # - TiDB -``` - -如果需要将 `tidb-drainer` 的 checkpoint 保存到其他**开启 TLS** 的数据库,需要创建一个包含 checkpoint 数据库的 TLS 信息的 secret,创建方式为: - -```bash -kubectl create secret generic ${checkpoint_tidb_client_secret} --namespace=${namespace} --from-file=tls.crt=client.pem --from-file=tls.key=client-key.pem --from-file=ca.crt=ca.pem -``` - -修改 `values.yaml` 将 `tlsSyncer.checkpoint.tlsClientSecretName` 设置为 `${checkpoint_tidb_client_secret}`,并配置相应的 `certAllowedCN`: - -```yaml -... -tlsSyncer: {} - tlsClientSecretName: ${downstream_database_secret_name} - # certAllowedCN: - # - TiDB - checkpoint: - tlsClientSecretName: ${checkpoint_tidb_client_secret} - # certAllowedCN: - # - TiDB -... -``` - -## 缩容/移除 Pump/Drainer 节点 - -如需详细了解如何维护 TiDB Binlog 集群节点状态信息,可以参考 [Pump/Drainer 的启动、退出流程](https://docs.pingcap.com/zh/tidb/stable/maintain-tidb-binlog-cluster#pumpdrainer-的启动退出流程)。 - -如果需要完整移除 TiDB Binlog 组件,最好是先移除 Pump 节点,再移除 Drainer 节点。 - -如果需要移除的 TiDB Binlog 组件开启了 TLS,则需要先将下述文件写入 `binlog.yaml`,并使用 `kubectl apply -f binlog.yaml` 启动一个挂载了 TLS 文件和 binlogctl 工具的 Pod。 - -```yaml -apiVersion: v1 -kind: Pod -metadata: - name: binlogctl -spec: - containers: - - name: binlogctl - image: pingcap/tidb-binlog:${tidb_version} - command: ['/bin/sh'] - stdin: true - stdinOnce: true - tty: true - volumeMounts: - - name: binlog-tls - mountPath: /etc/binlog-tls - volumes: - - name: binlog-tls - secret: - secretName: ${cluster_name}-cluster-client-secret -``` - -### 缩容 Pump 节点 - -1. 
执行以下命令缩容 Pump Pod: - - {{< copyable "shell-regular" >}} - - ```bash - kubectl patch tc ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"pump":{"replicas": ${pump_replicas}}}}' - ``` - - 其中 `${pump_replicas}` 是你想缩容至的目标副本数。 - - > **注意:** - > - > 不要缩容 Pump 到 0,否则 [Pump 节点会被完全移除](#完全移除-pump-节点)。 - -2. 等待 Pump Pod 自动下线被删除,运行以下命令观察: - - {{< copyable "shell-regular" >}} - - ```bash - watch kubectl get po ${cluster_name} -n ${namespace} - ``` - -3. (可选项) 强制下线 Pump - - 如果在下线 Pump 节点时遇到下线失败的情况,即 Pump Pod 长时间未删除,可以强制标注 Pump 状态为 offline。 - - 没有开启 TLS 时,使用下述指令标注状态为 offline。 - - {{< copyable "shell-regular" >}} - - ```shell - kubectl run update-pump-${ordinal_id} --image=pingcap/tidb-binlog:${tidb_version} --namespace=${namespace} --restart=OnFailure -- /binlogctl -pd-urls=http://${cluster_name}-pd:2379 -cmd update-pump -node-id ${cluster_name}-pump-${ordinal_id}:8250 --state offline - ``` - - 如果开启了 TLS,通过下述指令使用前面开启的 pod 来标注状态为 offline。 - - {{< copyable "shell-regular" >}} - - ```shell - kubectl exec binlogctl -n ${namespace} -- /binlogctl -pd-urls=https://${cluster_name}-pd:2379 -cmd update-pump -node-id ${cluster_name}-pump-${ordinal_id}:8250 --state offline -ssl-ca "/etc/binlog-tls/ca.crt" -ssl-cert "/etc/binlog-tls/tls.crt" -ssl-key "/etc/binlog-tls/tls.key" - ``` - -### 完全移除 Pump 节点 - -> **注意:** -> -> 执行如下步骤之前,集群内需要至少存在一个 Pump 节点。如果此时 Pump 节点已经缩容到 0,需要先至少扩容到 1,再进行下面的移除操作。如果需要扩容至 1,使用命令 `kubectl patch tc ${tidb-cluster} -n ${namespace} --type merge -p '{"spec":{"pump":{"replicas": 1}}}'`。 - -1. 移除 Pump 节点前,必须首先需要执行 `kubectl patch tc ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"tidb":{"binlogEnabled": false}}}'`,等待 TiDB Pod 完成重启更新后再移除 Pump 节点。如果直接移除 Pump 节点会导致 TiDB 没有可以写入的 Pump 而无法使用。 -2. 参考[缩容 Pump 节点步骤](#缩容-pump-节点)缩容 Pump 到 0。 -3. `kubectl patch tc ${cluster_name} -n ${namespace} --type json -p '[{"op":"remove", "path":"/spec/pump"}]'` 将 `spec.pump` 部分配置项全部删除。 -4. `kubectl delete sts ${cluster_name}-pump -n ${namespace}` 删除 Pump StatefulSet 资源。 -5. 通过 `kubectl get pvc -n ${namespace} -l app.kubernetes.io/component=pump` 查看 Pump 集群使用过的 PVC,随后使用 `kubectl delete pvc -l app.kubernetes.io/component=pump -n ${namespace}` 指令删除 Pump 的所有 PVC 资源。 - -### 移除 Drainer 节点 - -1. 下线 Drainer 节点: - - 使用下述指令下线 Drainer 节点,`${drainer_node_id}` 为需要下线的 Drainer 的 node ID。如果在 Helm 的 `values.yaml` 中配置了 `drainerName` 选项,则 `${drainer_node_id}` 为 `${drainer_name}-0`,否则 `${drainer_node_id}` 为 `${cluster_name}-${release_name}-drainer-0`。 - - 如果 Drainer 没有开启 TLS,使用下述指令新建 pod 下线 Drainer。 - - {{< copyable "shell-regular" >}} - - ```shell - kubectl run offline-drainer-0 --image=pingcap/tidb-binlog:${tidb_version} --namespace=${namespace} --restart=OnFailure -- /binlogctl -pd-urls=http://${cluster_name}-pd:2379 -cmd offline-drainer -node-id ${drainer_node_id}:8249 - ``` - - 如果 Drainer 开启了 TLS,通过下述指令使用前面开启的 pod 来下线 Drainer。 - - {{< copyable "shell-regular" >}} - - ```shell - kubectl exec binlogctl -n ${namespace} -- /binlogctl -pd-urls "https://${cluster_name}-pd:2379" -cmd offline-drainer -node-id ${drainer_node_id}:8249 -ssl-ca "/etc/binlog-tls/ca.crt" -ssl-cert "/etc/binlog-tls/tls.crt" -ssl-key "/etc/binlog-tls/tls.key" - ``` - - 然后查看 Drainer 的日志输出,输出 `drainer offline, please delete my pod` 后即可确认该节点已经成功下线。 - - {{< copyable "shell-regular" >}} - - ```shell - kubectl logs -f -n ${namespace} ${drainer_node_id} - ``` - -2. 
删除对应的 Drainer Pod: - - 运行 `helm uninstall ${release_name} -n ${namespace}` 指令即可删除 Drainer Pod。 - - 如果不再使用 Drainer,使用 `kubectl delete pvc data-${drainer_node_id} -n ${namespace}` 指令删除该 Drainer 的 PVC 资源。 - -3. (可选项) 强制下线 Drainer - - 如果在下线 Drainer 节点时遇到下线失败的情况,即执行下线操作后仍未看到 Drainer pod 输出可以删除 pod 的日志,可以先进行步骤 2 删除 Drainer Pod 后,再运行下述指令标注 Drainer 状态为 offline: - - 没有开启 TLS 时,使用下述指令标注状态为 offline。 - - {{< copyable "shell-regular" >}} - - ```shell - kubectl run update-drainer-${ordinal_id} --image=pingcap/tidb-binlog:${tidb_version} --namespace=${namespace} --restart=OnFailure -- /binlogctl -pd-urls=http://${cluster_name}-pd:2379 -cmd update-drainer -node-id ${drainer_node_id}:8249 --state offline - ``` - - 如果开启了 TLS,通过下述指令使用前面开启的 pod 来下线 Drainer。 - - {{< copyable "shell-regular" >}} - - ```shell - kubectl exec binlogctl -n ${namespace} -- /binlogctl -pd-urls=https://${cluster_name}-pd:2379 -cmd update-drainer -node-id ${drainer_node_id}:8249 --state offline -ssl-ca "/etc/binlog-tls/ca.crt" -ssl-cert "/etc/binlog-tls/tls.crt" -ssl-key "/etc/binlog-tls/tls.key" - ``` diff --git a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md index d4487431c..044512d71 100644 --- a/zh/deploy-tidb-cluster-across-multiple-kubernetes.md +++ b/zh/deploy-tidb-cluster-across-multiple-kubernetes.md @@ -513,9 +513,8 @@ EOF 2. 如果集群中部署了 TiProxy,为所有部署了 TiProxy 的 Kubernetes 集群升级 TiProxy 版本。 3. 如果集群中部署了 TiFlash,为所有部署了 TiFlash 的 Kubernetes 集群升级 TiFlash 版本。 4. 升级所有 Kubernetes 集群的 TiKV 版本。 - 5. 如果集群中部署了 Pump,为所有部署了 Pump 的 Kubernetes 集群升级 Pump 版本。 - 6. 升级所有 Kubernetes 集群的 TiDB 版本。 - 7. 如果集群中部署了 TiCDC,为所有部署了 TiCDC 的 Kubernetes 集群升级 TiCDC 版本。 + 5. 升级所有 Kubernetes 集群的 TiDB 版本。 + 6. 如果集群中部署了 TiCDC,为所有部署了 TiCDC 的 Kubernetes 集群升级 TiCDC 版本。 ## 退出和回收已加入的 TidbCluster @@ -523,7 +522,7 @@ EOF - 缩容后,集群中 TiKV 副本数应大于 PD 中设置的 `max-replicas` 数量,默认情况下 TiKV 副本数量需要大于 3。 -以上面文档创建的第二个 TidbCluster 为例,先将 PD、TiKV、TiDB 的副本数设置为 0,如果开启了 TiFlash、TiCDC、TiProxy、Pump 等其他组件,也请一并将其副本数设为 `0`: +以上面文档创建的第二个 TidbCluster 为例,先将 PD、TiKV、TiDB 的副本数设置为 0,如果开启了 TiFlash、TiCDC、TiProxy 等其他组件,也请一并将其副本数设为 `0`: > **注意:** > diff --git a/zh/enable-tls-between-components.md b/zh/enable-tls-between-components.md index 430b4878a..e56f42978 100644 --- a/zh/enable-tls-between-components.md +++ b/zh/enable-tls-between-components.md @@ -9,7 +9,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] 本文主要描述了在 Kubernetes 上如何为 TiDB 集群组件间开启 TLS。TiDB Operator 从 v1.1 开始已经支持为 Kubernetes 上 TiDB 集群组件间开启 TLS。开启步骤为: 1. 为即将被创建的 TiDB 集群的每个组件生成证书: - - 为 PD/TiKV/TiDB/Pump/Drainer/TiFlash/TiProxy/TiKV Importer/TiDB Lightning 组件分别创建一套 Server 端证书,保存为 Kubernetes Secret 对象:`${cluster_name}-${component_name}-cluster-secret` + - 为 PD/TiKV/TiDB/TiFlash/TiProxy/TiDB Lightning 组件分别创建一套 Server 端证书,保存为 Kubernetes Secret 对象:`${cluster_name}-${component_name}-cluster-secret` - 为它们的各种客户端创建一套共用的 Client 端证书,保存为 Kubernetes Secret 对象:`${cluster_name}-cluster-client-secret` > **注意:** @@ -277,117 +277,6 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tidb-server.json | cfssljson -bare tidb-server ``` - - Pump Server 端证书 - - 首先生成默认的 `pump-server.json` 文件: - - {{< copyable "shell-regular" >}} - - ``` shell - cfssl print-defaults csr > pump-server.json - ``` - - 然后编辑这个文件,修改 `CN`,`hosts` 属性: - - ``` json - ... 
- "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "*.${cluster_name}-pump", - "*.${cluster_name}-pump.${namespace}", - "*.${cluster_name}-pump.${namespace}.svc" - ], - ... - ``` - - 其中 `${cluster_name}` 为集群的名字,`${namespace}` 为 TiDB 集群部署的命名空间,用户也可以添加自定义 `hosts`。 - - 最后生成 Pump Server 端证书: - - {{< copyable "shell-regular" >}} - - ``` shell - cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal pump-server.json | cfssljson -bare pump-server - ``` - - - Drainer Server 端证书 - - 首先生成默认的 `drainer-server.json` 文件: - - {{< copyable "shell-regular" >}} - - ``` shell - cfssl print-defaults csr > drainer-server.json - ``` - - 然后编辑这个文件,修改 `CN`,`hosts` 属性: - - ``` json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "" - ], - ... - ``` - - 现在 Drainer 组件是通过 Helm 来部署的,根据 `values.yaml` 文件配置方式不同,所需要填写的 `hosts` 字段也不相同。 - - 如果部署的时候设置 `drainerName` 属性,像下面这样: - - ``` yaml - ... - # Change the name of the statefulset and pod - # The default is clusterName-ReleaseName-drainer - # Do not change the name of an existing running drainer: this is unsupported. - drainerName: my-drainer - ... - ``` - - 那么就这样配置 `hosts` 属性: - - ``` json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "*.${drainer_name}", - "*.${drainer_name}.${namespace}", - "*.${drainer_name}.${namespace}.svc" - ], - ... - ``` - - 如果部署的时候没有设置 `drainerName` 属性,需要这样配置 `hosts` 属性: - - ``` json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "*.${cluster_name}-${release_name}-drainer", - "*.${cluster_name}-${release_name}-drainer.${namespace}", - "*.${cluster_name}-${release_name}-drainer.${namespace}.svc" - ], - ... - ``` - - 其中 `${cluster_name}` 为集群的名字,`${namespace}` 为 TiDB 集群部署的命名空间,`${release_name}` 是 `helm install` 时候填写的 `release name`,`${drainer_name}` 为 `values.yaml` 文件里的 `drainerName`,用户也可以添加自定义 `hosts`。 - - 最后生成 Drainer Server 端证书: - - {{< copyable "shell-regular" >}} - - ``` shell - cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal drainer-server.json | cfssljson -bare drainer-server - ``` - - TiCDC Server 端证书 首先生成默认的 `ticdc-server.json` 文件: @@ -507,46 +396,6 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tiflash-server.json | cfssljson -bare tiflash-server ``` - - TiKV Importer Server 端证书 - - 如需要[使用 TiDB Lightning 恢复 Kubernetes 上的集群数据](restore-data-using-tidb-lightning.md),则需要为其中的 TiKV Importer 组件生成如下的 Server 端证书。 - - 首先生成默认的 `importer-server.json` 文件: - - {{< copyable "shell-regular" >}} - - ```shell - cfssl print-defaults csr > importer-server.json - ``` - - 然后编辑这个文件,修改 `CN`、`hosts` 属性: - - ```json - ... - "CN": "TiDB", - "hosts": [ - "127.0.0.1", - "::1", - "${cluster_name}-importer", - "${cluster_name}-importer.${namespace}", - "${cluster_name}-importer.${namespace}.svc", - "*.${cluster_name}-importer", - "*.${cluster_name}-importer.${namespace}", - "*.${cluster_name}-importer.${namespace}.svc" - ], - ... 
- ``` - - 其中 `${cluster_name}` 为集群的名字,`${namespace}` 为 TiDB 集群部署的命名空间,用户也可以添加自定义 `hosts`。 - - 最后生成 TiKV Importer Server 端证书: - - {{< copyable "shell-regular" >}} - - ``` shell - cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal importer-server.json | cfssljson -bare importer-server - ``` - - TiDB Lightning Server 端证书 如需要[使用 TiDB Lightning 恢复 Kubernetes 上的集群数据](restore-data-using-tidb-lightning.md),则需要为其中的 TiDB Lightning 组件生成如下的 Server 端证书。 @@ -637,22 +486,6 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] kubectl create secret generic ${cluster_name}-tidb-cluster-secret --namespace=${namespace} --from-file=tls.crt=tidb-server.pem --from-file=tls.key=tidb-server-key.pem --from-file=ca.crt=ca.pem ``` - Pump 集群证书 Secret: - - {{< copyable "shell-regular" >}} - - ``` shell - kubectl create secret generic ${cluster_name}-pump-cluster-secret --namespace=${namespace} --from-file=tls.crt=pump-server.pem --from-file=tls.key=pump-server-key.pem --from-file=ca.crt=ca.pem - ``` - - Drainer 集群证书 Secret: - - {{< copyable "shell-regular" >}} - - ``` shell - kubectl create secret generic ${cluster_name}-drainer-cluster-secret --namespace=${namespace} --from-file=tls.crt=drainer-server.pem --from-file=tls.key=drainer-server-key.pem --from-file=ca.crt=ca.pem - ``` - TiCDC 集群证书 Secret: {{< copyable "shell-regular" >}} @@ -675,14 +508,6 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] kubectl create secret generic ${cluster_name}-tiflash-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiflash-server.pem --from-file=tls.key=tiflash-server-key.pem --from-file=ca.crt=ca.pem ``` - TiKV Importer 集群证书 Secret: - - {{< copyable "shell-regular" >}} - - ``` shell - kubectl create secret generic ${cluster_name}-importer-cluster-secret --namespace=${namespace} --from-file=tls.crt=importer-server.pem --from-file=tls.key=importer-server-key.pem --from-file=ca.crt=ca.pem - ``` - TiDB Lightning 集群证书 Secret: {{< copyable "shell-regular" >}} @@ -699,7 +524,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] kubectl create secret generic ${cluster_name}-cluster-client-secret --namespace=${namespace} --from-file=tls.crt=client.pem --from-file=tls.key=client-key.pem --from-file=ca.crt=ca.pem ``` - 这里给 PD/TiKV/TiDB/Pump/Drainer 的 Server 端证书分别创建了一个 Secret 供他们启动时加载使用,另外一套 Client 端证书供他们的客户端连接使用。 + 这里给 PD/TiKV/TiDB 的 Server 端证书分别创建了一个 Secret 供他们启动时加载使用,另外一套 Client 端证书供他们的客户端连接使用。 ### 使用 `cert-manager` 系统颁发证书 @@ -956,146 +781,6 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] 创建这个对象以后,`cert-manager` 会生成一个名字为 `${cluster_name}-tidb-cluster-secret` 的 Secret 对象供 TiDB 集群的 TiDB 组件使用。 - - Pump 组件的 Server 端证书。 - - ``` yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-pump-cluster-secret - namespace: ${namespace} - spec: - secretName: ${cluster_name}-pump-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "*.${cluster_name}-pump" - - "*.${cluster_name}-pump.${namespace}" - - "*.${cluster_name}-pump.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - 其中 `${cluster_name}` 为集群的名字: - - - `spec.secretName` 请设置为 `${cluster_name}-pump-cluster-secret`; - - `usages` 请添加上 `server auth` 和 `client auth`; - - `dnsNames` 
需要填写这些 DNS,根据需要可以填写其他 DNS: - - `*.${cluster_name}-pump` - - `*.${cluster_name}-pump.${namespace}` - - `*.${cluster_name}-pump.${namespace}.svc` - - `ipAddresses` 需要填写这两个 IP ,根据需要可以填写其他 IP: - - `127.0.0.1` - - `::1` - - `issuerRef` 请填写上面创建的 Issuer; - - 其他属性请参考 [cert-manager API](https://cert-manager.io/docs/reference/api-docs/#cert-manager.io/v1.CertificateSpec)。 - - 创建这个对象以后,`cert-manager` 会生成一个名字为 `${cluster_name}-pump-cluster-secret` 的 Secret 对象供 TiDB 集群的 Pump 组件使用。 - - - Drainer 组件的 Server 端证书。 - - 现在 Drainer 组件是通过 Helm 来部署的,根据 `values.yaml` 文件配置方式不同,所需要填写的 `dnsNames` 字段也不相同。 - - 如果部署的时候设置 `drainerName` 属性,像下面这样: - - ``` yaml - ... - # Change the name of the statefulset and pod - # The default is clusterName-ReleaseName-drainer - # Do not change the name of an existing running drainer: this is unsupported. - drainerName: my-drainer - ... - ``` - - 那么就需要这样配置证书: - - ``` yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-drainer-cluster-secret - namespace: ${namespace} - spec: - secretName: ${cluster_name}-drainer-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "*.${drainer_name}" - - "*.${drainer_name}.${namespace}" - - "*.${drainer_name}.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - 如果部署的时候没有设置 `drainerName` 属性,需要这样配置 `dnsNames` 属性: - - ``` yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-drainer-cluster-secret - namespace: ${namespace} - spec: - secretName: ${cluster_name}-drainer-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "*.${cluster_name}-${release_name}-drainer" - - "*.${cluster_name}-${release_name}-drainer.${namespace}" - - "*.${cluster_name}-${release_name}-drainer.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - 其中 `${cluster_name}` 为集群的名字,`${namespace}` 为 TiDB 集群部署的命名空间,`${release_name}` 是 `helm install` 时候填写的 `release name`,`${drainer_name}` 为 `values.yaml` 文件里的 `drainerName`,用户也可以添加自定义 `dnsNames`。 - - - `spec.secretName` 请设置为 `${cluster_name}-drainer-cluster-secret`; - - `usages` 请添加上 `server auth` 和 `client auth`; - - `dnsNames` 请参考上面的描述; - - `ipAddresses` 需要填写这两个 IP ,根据需要可以填写其他 IP: - - `127.0.0.1` - - `::1` - - `issuerRef` 请填写上面创建的 Issuer; - - 其他属性请参考 [cert-manager API](https://cert-manager.io/docs/reference/api-docs/#cert-manager.io/v1.CertificateSpec)。 - - 创建这个对象以后,`cert-manager` 会生成一个名字为 `${cluster_name}-drainer-cluster-secret` 的 Secret 对象供 TiDB 集群的 Drainer 组件使用。 - - TiCDC 组件的 Server 端证书。 TiCDC 从 v4.0.3 版本开始支持 TLS,TiDB Operator v1.1.3 版本同步支持 TiCDC 开启 TLS 功能。 @@ -1218,59 +903,6 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] 创建这个对象以后,`cert-manager` 会生成一个名字为 `${cluster_name}-tiflash-cluster-secret` 的 Secret 对象供 TiDB 集群的 TiFlash 组件使用。 - - TiKV Importer 组件的 Server 端证书。 - - 如需要[使用 TiDB Lightning 恢复 Kubernetes 上的集群数据](restore-data-using-tidb-lightning.md),则需要为其中的 TiKV Importer 组件生成如下的 Server 端证书。 - - ```yaml - apiVersion: cert-manager.io/v1 - kind: Certificate - metadata: - name: ${cluster_name}-importer-cluster-secret - namespace: ${namespace} - spec: - secretName: 
${cluster_name}-importer-cluster-secret - duration: 8760h # 365d - renewBefore: 360h # 15d - subject: - organizations: - - PingCAP - commonName: "TiDB" - usages: - - server auth - - client auth - dnsNames: - - "${cluster_name}-importer" - - "${cluster_name}-importer.${namespace}" - - "${cluster_name}-importer.${namespace}.svc" - - "*.${cluster_name}-importer" - - "*.${cluster_name}-importer.${namespace}" - - "*.${cluster_name}-importer.${namespace}.svc" - ipAddresses: - - 127.0.0.1 - - ::1 - issuerRef: - name: ${cluster_name}-tidb-issuer - kind: Issuer - group: cert-manager.io - ``` - - 其中 `${cluster_name}` 为集群的名字: - - - `spec.secretName` 请设置为 `${cluster_name}-importer-cluster-secret`; - - `usages` 请添加上 `server auth` 和 `client auth`; - - `dnsNames` 需要填写这些 DNS,根据需要可以填写其他 DNS: - - `${cluster_name}-importer` - - `${cluster_name}-importer.${namespace}` - - `${cluster_name}-importer.${namespace}.svc` - - `ipAddresses` 需要填写这两个 IP ,根据需要可以填写其他 IP: - - `127.0.0.1` - - `::1` - - `issuerRef` 请填写上面创建的 Issuer; - - 其他属性请参考 [cert-manager API](https://cert-manager.io/docs/reference/api-docs/#cert-manager.io/v1.CertificateSpec)。 - - 创建这个对象以后,`cert-manager` 会生成一个名字为 `${cluster_name}-importer-cluster-secret` 的 Secret 对象供 TiDB 集群的 TiKV Importer 组件使用。 - - TiDB Lightning 组件的 Server 端证书。 如需要[使用 TiDB Lightning 恢复 Kubernetes 上的集群数据](restore-data-using-tidb-lightning.md),则需要为其中的 TiDB Lightning 组件生成如下的 Server 端证书。 @@ -1368,9 +1000,8 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] - 创建一套 TiDB 集群 - 为 TiDB 组件间开启 TLS,并开启 CN 验证 - 部署一套监控系统 -- 部署 Pump 组件,并开启 CN 验证 -1. 创建一套 TiDB 集群(监控系统和 Pump 组件已包含在内): +1. 创建一套 TiDB 集群(监控系统组件已包含在内): 创建 `tidb-cluster.yaml` 文件: @@ -1418,15 +1049,6 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] security: cluster-verify-cn: - TiDB - pump: - baseImage: pingcap/tidb-binlog - replicas: 1 - requests: - storage: "100Gi" - config: - security: - cert-allowed-cn: - - TiDB --- apiVersion: pingcap.com/v1alpha1 kind: TidbMonitor @@ -1485,52 +1107,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/enable-tls-between-components/'] > - TiDB > ``` -2. 创建 Drainer 组件并开启 TLS 以及 CN 验证。 - - - 第一种方式:创建 Drainer 的时候设置 `drainerName`: - - 编辑 values.yaml 文件,设置好 drainer-name,并将 TLS 功能打开: - - ``` yaml - ... - drainerName: ${drainer_name} - tlsCluster: - enabled: true - certAllowedCN: - - TiDB - ... - ``` - - 然后部署 Drainer 集群: - - {{< copyable "shell-regular" >}} - - ``` shell - helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml - ``` - - - 第二种方式:创建 Drainer 的时候不设置 `drainerName`: - - 编辑 values.yaml 文件,将 TLS 功能打开: - - ``` yaml - ... - tlsCluster: - enabled: true - certAllowedCN: - - TiDB - ... - ``` - - 然后部署 Drainer 集群: - - {{< copyable "shell-regular" >}} - - ``` shell - helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml - ``` - -3. 创建 Backup/Restore 资源对象。 +2. 
创建 Backup/Restore 资源对象。 - 创建 `backup.yaml` 文件: diff --git a/zh/modify-tidb-configuration.md b/zh/modify-tidb-configuration.md index ecd9737cf..6170eb289 100644 --- a/zh/modify-tidb-configuration.md +++ b/zh/modify-tidb-configuration.md @@ -13,7 +13,7 @@ TiDB 集群自身支持通过 SQL 对 TiDB、TiKV、PD 等组件进行[在线配 对于 TiDB 和 TiKV,如果通过 SQL 进行[在线配置变更](https://docs.pingcap.com/zh/tidb/stable/dynamic-config),在升级或者重启后,配置项会被 TidbCluster CR 中的配置项覆盖,导致在线变更的配置失效。因此,如果需要持久化修改配置,你需要在 TidbCluster CR 中直接修改配置项。 -对于 TiFlash、TiProxy、TiCDC 和 Pump,目前只能通过在 TidbCluster CR 中修改配置项。 +对于 TiFlash、TiProxy 和 TiCDC,目前只能通过在 TidbCluster CR 中修改配置项。 在 TidbCluster CR 中修改配置项的步骤如下: diff --git a/zh/releases/release-1.0.4.md b/zh/releases/release-1.0.4.md index 97b6b9581..ceb9d69ee 100644 --- a/zh/releases/release-1.0.4.md +++ b/zh/releases/release-1.0.4.md @@ -31,7 +31,7 @@ There is no action required if you are upgrading from [v1.0.3](release-1.0.3.md) New Helm chart `tidb-lightning` brings [TiDB Lightning](https://docs.pingcap.com/tidb/stable/tidb-lightning-overview) support for TiDB on Kubernetes. Check out the [document](../restore-data-using-tidb-lightning.md) for detailed user guide. -Another new Helm chart `tidb-drainer` brings multiple drainers support for TiDB Binlog on Kubernetes. Check out the [document](../deploy-tidb-binlog.md) for detailed user guide. +Another new Helm chart `tidb-drainer` brings multiple drainers support for TiDB Binlog on Kubernetes. ### Improvements diff --git a/zh/restore-from-aws-s3-by-snapshot.md b/zh/restore-from-aws-s3-by-snapshot.md index ce5cbb8ec..7bcdebd6c 100644 --- a/zh/restore-from-aws-s3-by-snapshot.md +++ b/zh/restore-from-aws-s3-by-snapshot.md @@ -13,7 +13,7 @@ summary: 介绍如何将存储在 S3 上的备份元数据以及 EBS 卷快照 - 要使用此功能,TiDB Operator 应为 v1.4.0 及以上,TiDB 应为 v6.3.0 及以上。 - 只支持相同 TiKV 节点个数以及卷配置的恢复。即恢复集群 TiKV 个数以及卷相关的配置需要和备份集群的完全一致。 -- 暂不支持 TiFlash, CDC,DM 和 binlog 相关节点的卷快照恢复 +- 暂不支持 TiFlash、TiCDC 和 DM 相关节点的卷快照恢复 - 目前 restore 仅支持 gp3 默认配置 (3000IOPS/125 MB) 进行恢复, 如需其他配置可指定卷类型或者配置进行恢复,如:`--volume-type=io2`,`--volume-iops=7000`,`--volume-throughput=400` ```yaml diff --git a/zh/restore-from-gcs.md b/zh/restore-from-gcs.md index ea50bdaa3..e092a7229 100644 --- a/zh/restore-from-gcs.md +++ b/zh/restore-from-gcs.md @@ -10,7 +10,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/restore-from-gcs/'] 本文使用的恢复方式基于 TiDB Operator v1.1 及以上的 CustomResourceDefinition (CRD) 实现,底层通过使用 [TiDB Lightning TiDB-backend](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-backends#tidb-lightning-tidb-backend) 来恢复数据。 -TiDB Lightning 是一款将全量数据高速导入到 TiDB 集群的工具,可用于从本地盘、Google Cloud Storage (GCS) 或 Amazon S3 云盘读取数据。目前,TiDB Lightning 支持三种后端:`Importer-backend`、`Local-backend`、`TiDB-backend`。本文介绍的方法使用 `TiDB-backend`。关于这三种后端的区别和选择,请参阅 [TiDB Lightning 文档](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-backends)。如果要使用 `Importer-backend` 或者 `Local-backend` 导入数据,请参阅[使用 TiDB Lightning 导入集群数据](restore-data-using-tidb-lightning.md)。 +TiDB Lightning 是一款将全量数据高速导入到 TiDB 集群的工具,可用于从本地盘、Google Cloud Storage (GCS) 或 Amazon S3 云盘读取数据。目前,TiDB Lightning 支持两种后端:`Local-backend`、`TiDB-backend`。本文介绍的方法使用 `TiDB-backend`。关于这两种后端的区别和选择,请参阅 [TiDB Lightning 文档](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-backends)。如果要使用 `Local-backend` 导入数据,请参阅[使用 TiDB Lightning 导入集群数据](restore-data-using-tidb-lightning.md)。 以下示例将存储在 [GCS](https://cloud.google.com/storage/docs/) 上指定路径上的集群备份数据恢复到 TiDB 集群。 diff --git a/zh/restore-from-s3.md b/zh/restore-from-s3.md index 947d7e05c..97c2a8933 100644 --- a/zh/restore-from-s3.md +++ b/zh/restore-from-s3.md @@ -10,7 +10,7 @@
aliases: ['/docs-cn/tidb-in-kubernetes/dev/restore-from-s3/'] 本文使用的恢复方式基于 TiDB Operator v1.1 及以上的 CustomResourceDefinition (CRD) 实现,底层通过使用 [TiDB Lightning TiDB-backend](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-backends#tidb-lightning-tidb-backend) 来恢复数据。 -TiDB Lightning 是一款将全量数据高速导入到 TiDB 集群的工具,可用于从本地盘、Google Cloud Storage (GCS) 或 Amazon S3 云盘读取数据。目前,TiDB Lightning 支持三种后端:`Importer-backend`、`Local-backend`、`TiDB-backend`。本文介绍的方法使用 `TiDB-backend`。关于这三种后端的区别和选择,请参阅 [TiDB Lightning 文档](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-backends)。如果要使用 `Importer-backend` 或者 `Local-backend` 导入数据,请参阅[使用 TiDB Lightning 导入集群数据](restore-data-using-tidb-lightning.md)。 +TiDB Lightning 是一款将全量数据高速导入到 TiDB 集群的工具,可用于从本地盘、Google Cloud Storage (GCS) 或 Amazon S3 云盘读取数据。目前,TiDB Lightning 支持两种后端:`Local-backend`、`TiDB-backend`。本文介绍的方法使用 `TiDB-backend`。关于这两种后端的区别和选择,请参阅 [TiDB Lightning 文档](https://docs.pingcap.com/zh/tidb/stable/tidb-lightning-backends)。如果要使用 `Local-backend` 导入数据,请参阅[使用 TiDB Lightning 导入集群数据](restore-data-using-tidb-lightning.md)。 以下示例将兼容 S3 的存储(指定路径)上的备份数据恢复到 TiDB 集群。 diff --git a/zh/suspend-tidb-cluster.md b/zh/suspend-tidb-cluster.md index e6bd9b1fb..f66df22e9 100644 --- a/zh/suspend-tidb-cluster.md +++ b/zh/suspend-tidb-cluster.md @@ -58,7 +58,6 @@ summary: 了解如何通过配置挂起 Kubernetes 上的 TiDB 集群。 * TiFlash * TiCDC * TiKV - * Pump * TiProxy * PD diff --git a/zh/tidb-toolkit.md b/zh/tidb-toolkit.md index 3e7a6b1cb..4632a9250 100644 --- a/zh/tidb-toolkit.md +++ b/zh/tidb-toolkit.md @@ -177,10 +177,7 @@ version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9b Kubernetes 应用在 Helm 中被打包为 chart。PingCAP 针对 Kubernetes 上的 TiDB 部署运维提供了多个 Helm chart: * `tidb-operator`:用于部署 TiDB Operator; -* `tidb-cluster`:用于部署 TiDB 集群; -* `tidb-backup`:用于 TiDB 集群备份恢复; * `tidb-lightning`:用于 TiDB 集群导入数据; -* `tidb-drainer`:用于部署 TiDB Drainer; 这些 chart 都托管在 PingCAP 维护的 helm chart 仓库 `https://charts.pingcap.org/` 中,你可以通过下面的命令添加该仓库: @@ -200,9 +197,6 @@ helm search repo pingcap ``` NAME CHART VERSION APP VERSION DESCRIPTION -pingcap/tidb-backup v1.6.1 A Helm chart for TiDB Backup or Restore -pingcap/tidb-cluster v1.6.1 A Helm chart for TiDB Cluster -pingcap/tidb-drainer v1.6.1 A Helm chart for TiDB Binlog drainer. pingcap/tidb-lightning v1.6.1 A Helm chart for TiDB Lightning pingcap/tidb-operator v1.6.1 v1.6.1 tidb-operator Helm chart for Kubernetes ``` @@ -265,7 +259,6 @@ helm uninstall ${release_name} -n ${namespace} ```shell wget http://charts.pingcap.org/tidb-operator-v1.6.1.tgz -wget http://charts.pingcap.org/tidb-drainer-v1.6.1.tgz wget http://charts.pingcap.org/tidb-lightning-v1.6.1.tgz ``` diff --git a/zh/upgrade-a-tidb-cluster.md b/zh/upgrade-a-tidb-cluster.md index e291272e3..aeda91817 100644 --- a/zh/upgrade-a-tidb-cluster.md +++ b/zh/upgrade-a-tidb-cluster.md @@ -52,12 +52,12 @@ Kubernetes 提供了[滚动更新功能](https://kubernetes.io/docs/tutorials/ku kubectl edit tc ${cluster_name} -n ${namespace} ``` - 正常情况下,集群内的各组件应该使用相同版本,所以一般修改 `spec.version` 即可。如果要为集群内不同组件设置不同的版本,可以修改 `spec..version`。 + 正常情况下,集群内的各组件应该使用相同版本,所以一般修改 `spec.version` 即可。如果要为集群内不同组件设置不同的版本,可以修改 `spec..version`。 `version` 字段格式如下: - `spec.version`,格式为 `imageTag`,例如 `v5.3`。 - - `spec..version`,格式为 `imageTag`,例如 `v3.1.0`。 + - `spec..version`,格式为 `imageTag`,例如 `v3.1.0`。 2.
查看升级进度: diff --git a/zh/use-auto-failover.md b/zh/use-auto-failover.md index 8bc852858..36f7011d7 100644 --- a/zh/use-auto-failover.md +++ b/zh/use-auto-failover.md @@ -39,7 +39,7 @@ controllerManager: ## 实现原理 -TiDB 集群包括 PD、TiKV、TiDB、TiFlash、TiCDC 和 Pump 六个组件。目前 TiCDC 和 Pump 并不支持故障自动转移,PD、TiKV、TiDB 和 TiFlash 的故障转移策略有所不同,本节将详细介绍这几种策略。 +TiDB 集群包括 PD、TiKV、TiDB、TiFlash 和 TiCDC 五个组件。目前 TiCDC 并不支持故障自动转移,PD、TiKV、TiDB 和 TiFlash 的故障转移策略有所不同,本节将详细介绍这几种策略。 ### PD 故障转移策略
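For reference, automatic failover itself is enabled when deploying TiDB Operator, typically through the chart's `controllerManager` settings; a hedged sketch of commonly documented keys (verify the exact names against the `tidb-operator` chart version in use):

```yaml
# tidb-operator values.yaml excerpt (illustrative)
controllerManager:
  autoFailover: true        # global switch for automatic failover
  pdFailoverPeriod: 5m      # how long a PD member may stay unhealthy before failover
  tikvFailoverPeriod: 5m
  tidbFailoverPeriod: 5m
```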