
TiDB Cluster Scale-out, Scale-in, and Cluster Rename

Original by 键盘丐, 2022-03-29

Objective

    In this experiment we add a TiKV node to the TiDB cluster, remove a TiKV node again, and then rename the cluster.
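
For orientation, the TiUP subcommands exercised below are summarized here (cluster name, topology file, and node address are those of this lab environment):

tiup cluster scale-out tidb-test scale-out-tikv.yaml          # add nodes described in a topology file
tiup cluster scale-in tidb-test --node 192.168.59.24:20160    # mark a node for removal
tiup cluster prune tidb-test                                  # destroy nodes left in Tombstone state
tiup cluster rename tidb-test tidb-prod                       # rename the cluster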

1. TiDB Cluster Scale-out

  • Check the cluster status: cluster tidb-test has 12 nodes, 3 of which are TiKV nodes (in the Status column, Up|L marks the PD leader and Up|UI the PD node hosting the TiDB Dashboard)


[root@node1 opt]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.2.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.59.29:2379/dashboard
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
192.168.59.26:9093   alertmanager  192.168.59.26  9093/9094    linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.59.26:3000   grafana       192.168.59.26  3000         linux/x86_64  Up      -                             /tidb-deploy/grafana-3000
192.168.59.27:2379   pd            192.168.59.27  2379/2380    linux/x86_64  Up      /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.28:2379   pd            192.168.59.28  2379/2380    linux/x86_64  Up|L    /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.29:2379   pd            192.168.59.29  2379/2380    linux/x86_64  Up|UI   /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.26:9090   prometheus    192.168.59.26  9090         linux/x86_64  Up      /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
192.168.59.27:4000   tidb          192.168.59.27  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.28:4000   tidb          192.168.59.28  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.29:4000   tidb          192.168.59.29  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.27:20160  tikv          192.168.59.27  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.28:20160  tikv          192.168.59.28  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.29:20160  tikv          192.168.59.29  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 12

  • Edit the scale-out topology file scale-out-tikv.yaml with the host information of the node to be added. The file contents are shown below; note the indentation and the space after each colon. An optional pre-check is sketched after the file.

[root@node1 ~]# cat scale-out-tikv.yaml
tikv_servers:
 - host: 192.168.59.24
   ssh_port: 22
   port: 20160
   status_port: 20180
   deploy_dir: "/tidb-deploy/tikv-20160"
   data_dir: "/tidb-data/tikv-20160"
   log_dir: "/tidb-deploy/tikv-20160/log"
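
Before running the actual scale-out, the new host can be pre-checked against the existing cluster using the same topology file. A minimal sketch, assuming root SSH access as in the scale-out command below (exact flags may vary slightly between tiup-cluster versions):

tiup cluster check tidb-test scale-out-tikv.yaml --cluster --user root -p
# optionally let TiUP try to fix the failed check items:
tiup cluster check tidb-test scale-out-tikv.yaml --cluster --apply --user root -p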
  • Scale out with the tiup cluster scale-out command. After scaling out, one TiKV node is added and the total node count becomes 13. A quick way to confirm the new store has registered with PD is sketched after the output.

[root@node1 ~]# tiup cluster scale-out tidb-test scale-out-tikv.yaml -uroot -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster scale-out tidb-test scale-out-tikv.yaml -uroot -p
Input SSH password:

+ Detect CPU Arch
  - Detecting node 192.168.59.24 ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb-test
Cluster version: v5.2.1
Role  Host           Ports        OS/Arch       Directories
----  ----           -----        -------       -----------
tikv  192.168.59.24  20160/20180  linux/x86_64  /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27


  - Download tikv:v5.2.1 (linux/amd64) ... Done
+ [ Serial ] - RootSSH: user=root, host=192.168.59.24, port=22
+ [ Serial ] - EnvInit: user=tidb, host=192.168.59.24
+ [ Serial ] - Mkdir: host=192.168.59.24, directories='/tidb-deploy','/tidb-data'

+ [ Serial ] - UserSSH: user=tidb, host=192.168.59.24
+ [ Serial ] - Mkdir: host=192.168.59.24, directories='/tidb-deploy/tikv-20160','/tidb-deploy/tikv-20160/bin','/tidb-deploy/tikv-20160/conf','/tidb-deploy/tikv-20160/scripts'
  - Copy node_exporter -> 192.168.59.24 ...  Mkdir: host=192.168.59.24, directories='/tidb-deploy/monitor-9100','/tidb-data/monitor-9100','/tidb-d...
  - Copy node_exporter -> 192.168.59.24 ...  Mkdir: host=192.168.59.24, directories='/tidb-deploy/monitor-9100','/tidb-data/monitor-9100','/tidb-d...
  - Copy blackbox_exporter -> 192.168.59.24 ...  Mkdir: host=192.168.59.24, directories='/tidb-deploy/monitor-9100','/tidb-data/monitor-9100','/ti...
  - Copy node_exporter -> 192.168.59.24 ... Done
+ [ Serial ] - ScaleConfig: cluster=tidb-test, user=tidb, host=192.168.59.24, service=tikv-20160.service, deploy_dir=/tidb-deploy/tikv-20160, data_dir=[/tidb-data/tikv-20160], log_dir=/tidb-deploy/tikv-20160/log, cache_dir=
+ Check status
Enabling component tikv
        Enabling instance 192.168.59.24:20160
        Enable instance 192.168.59.24:20160 success
Enabling component node_exporter
        Enabling instance 192.168.59.24
        Enable 192.168.59.24 success
Enabling component blackbox_exporter
        Enabling instance 192.168.59.24
        Enable 192.168.59.24 success
+ [ Serial ] - Save meta
+ [ Serial ] - StartCluster
Starting component tikv
        Starting instance 192.168.59.24:20160
        Start instance 192.168.59.24:20160 success
Starting component node_exporter
        Starting instance 192.168.59.24
        Start 192.168.59.24 success
Starting component blackbox_exporter
        Starting instance 192.168.59.24
        Start 192.168.59.24 success
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.28, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/tikv-20160.service, deploy_dir=/tidb-deploy/tikv-20160, data_dir=[/tidb-data/tikv-20160], log_dir=/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.29, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.27, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.28, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.27, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/tikv-20160.service, deploy_dir=/tidb-deploy/tikv-20160, data_dir=[/tidb-data/tikv-20160], log_dir=/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.29, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/tikv-20160.service, deploy_dir=/tidb-deploy/tikv-20160, data_dir=[/tidb-data/tikv-20160], log_dir=/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.24, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/tikv-20160.service, deploy_dir=/tidb-deploy/tikv-20160, data_dir=[/tidb-data/tikv-20160], log_dir=/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.27, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/tidb-4000.service, deploy_dir=/tidb-deploy/tidb-4000, data_dir=[], log_dir=/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.28, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/tidb-4000.service, deploy_dir=/tidb-deploy/tidb-4000, data_dir=[], log_dir=/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.29, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/tidb-4000.service, deploy_dir=/tidb-deploy/tidb-4000, data_dir=[], log_dir=/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.26, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/prometheus-9090.service, deploy_dir=/tidb-deploy/prometheus-9090, data_dir=[/tidb-data/prometheus-9090], log_dir=/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.26, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/grafana-3000.service, deploy_dir=/tidb-deploy/grafana-3000, data_dir=[], log_dir=/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - InitConfig: cluster=tidb-test, user=tidb, host=192.168.59.26, path=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache/alertmanager-9093.service, deploy_dir=/tidb-deploy/alertmanager-9093, data_dir=[/tidb-data/alertmanager-9093], log_dir=/tidb-deploy/alertmanager-9093/log, cache_dir=/root/.tiup/storage/cluster/clusters/tidb-test/config-cache
+ [ Serial ] - SystemCtl: host=192.168.59.26 action=reload prometheus-9090.service
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Scaled cluster `tidb-test` out successfully
[root@node1 ~]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.2.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.59.29:2379/dashboard
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
192.168.59.26:9093   alertmanager  192.168.59.26  9093/9094    linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.59.26:3000   grafana       192.168.59.26  3000         linux/x86_64  Up      -                             /tidb-deploy/grafana-3000
192.168.59.27:2379   pd            192.168.59.27  2379/2380    linux/x86_64  Up      /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.28:2379   pd            192.168.59.28  2379/2380    linux/x86_64  Up|L    /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.29:2379   pd            192.168.59.29  2379/2380    linux/x86_64  Up|UI   /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.26:9090   prometheus    192.168.59.26  9090         linux/x86_64  Up      /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
192.168.59.27:4000   tidb          192.168.59.27  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.28:4000   tidb          192.168.59.28  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.29:4000   tidb          192.168.59.29  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.24:20160  tikv          192.168.59.24  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.27:20160  tikv          192.168.59.27  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.28:20160  tikv          192.168.59.28  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.29:20160  tikv          192.168.59.29  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 13
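
To confirm that the new TiKV instance has registered with PD as a store and is starting to receive regions, pd-ctl can be queried through TiUP. A sketch, assuming the PD endpoint of this cluster (the store ID will differ in your output):

tiup ctl:v5.2.1 pd -u http://192.168.59.27:2379 store
# look for the store whose address is 192.168.59.24:20160 and watch its region_count grow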

2. TiDB Cluster Scale-in

  • Scale in with the tiup cluster scale-in command. After tiup cluster scale-in completes, run tiup cluster prune tidb-test as prompted to fully clean up the node; a store-level check on the PD side is sketched after the prune output.


[root@node1 ~]# tiup cluster scale-in tidb-test --node 192.168.59.24:20160
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster scale-in tidb-test --node 192.168.59.24:20160
This operation will delete the 192.168.59.24:20160 nodes in `tidb-test` and all their data.
Do you want to continue? [y/N]:(default=N) y
Scale-in nodes...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.24
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [ Serial ] - ClusterOperate: operation=ScaleInOperation, options={Roles:[] Nodes:[192.168.59.24:20160] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: Concurrency:5 SSHProxyHost: SSHProxyPort:22 SSHProxyUser:root SSHProxyIdentity:/root/.ssh/id_rsa SSHProxyUsePassword:false SSHProxyTimeout:5 CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[] ShowUptime:false JSON:false Operation:StartOperation}
The component `tikv` will become tombstone, maybe exists in several minutes or hours, after that you can use the prune command to clean it
+ [ Serial ] - UpdateMeta: cluster=tidb-test, deleted=`''`
+ [ Serial ] - UpdateTopology: cluster=tidb-test
+ Refresh instance configs
  - Regenerate config pd -> 192.168.59.27:2379 ... Done
  - Regenerate config pd -> 192.168.59.28:2379 ... Done
  - Regenerate config pd -> 192.168.59.29:2379 ... Done
  - Regenerate config tikv -> 192.168.59.27:20160 ... Done
  - Regenerate config tikv -> 192.168.59.28:20160 ... Done
  - Regenerate config tikv -> 192.168.59.29:20160 ... Done
  - Regenerate config tidb -> 192.168.59.27:4000 ... Done
  - Regenerate config tidb -> 192.168.59.28:4000 ... Done
  - Regenerate config tidb -> 192.168.59.29:4000 ... Done
  - Regenerate config prometheus -> 192.168.59.26:9090 ... Done
  - Regenerate config grafana -> 192.168.59.26:3000 ... Done
  - Regenerate config alertmanager -> 192.168.59.26:9093 ... Done
+ [ Serial ] - SystemCtl: host=192.168.59.26 action=reload prometheus-9090.service
Scaled cluster `tidb-test` in successfully
[root@node1 ~]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.2.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.59.29:2379/dashboard
ID                   Role          Host           Ports        OS/Arch       Status     Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------     --------                      ----------
192.168.59.26:9093   alertmanager  192.168.59.26  9093/9094    linux/x86_64  Up         /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.59.26:3000   grafana       192.168.59.26  3000         linux/x86_64  Up         -                             /tidb-deploy/grafana-3000
192.168.59.27:2379   pd            192.168.59.27  2379/2380    linux/x86_64  Up         /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.28:2379   pd            192.168.59.28  2379/2380    linux/x86_64  Up|L       /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.29:2379   pd            192.168.59.29  2379/2380    linux/x86_64  Up|UI      /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.26:9090   prometheus    192.168.59.26  9090         linux/x86_64  Up         /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
192.168.59.27:4000   tidb          192.168.59.27  4000/10080   linux/x86_64  Up         -                             /tidb-deploy/tidb-4000
192.168.59.28:4000   tidb          192.168.59.28  4000/10080   linux/x86_64  Up         -                             /tidb-deploy/tidb-4000
192.168.59.29:4000   tidb          192.168.59.29  4000/10080   linux/x86_64  Up         -                             /tidb-deploy/tidb-4000
192.168.59.24:20160  tikv          192.168.59.24  20160/20180  linux/x86_64  Tombstone  /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.27:20160  tikv          192.168.59.27  20160/20180  linux/x86_64  Up         /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.28:20160  tikv          192.168.59.28  20160/20180  linux/x86_64  Up         /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.29:20160  tikv          192.168.59.29  20160/20180  linux/x86_64  Up         /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 13
There are some nodes can be pruned:
        Nodes: [192.168.59.24:20160]
        You can destroy them with the command: `tiup cluster prune tidb-test`
[root@node1 ~]# tiup cluster prune tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster prune tidb-test
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.24
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [ Serial ] - FindTomestoneNodes
Will destroy these nodes: [192.168.59.24:20160]
Do you confirm this action? [y/N]:(default=N) y
Start destroy Tombstone nodes: [192.168.59.24:20160] ...
+ [ Serial ] - ClusterOperate: operation=DestroyTombstoneOperation, options={Roles:[] Nodes:[] Force:false SSHTimeout:5 OptTimeout:120 APITimeout:300 IgnoreConfigCheck:false NativeSSH:false SSHType: Concurrency:5 SSHProxyHost: SSHProxyPort:22 SSHProxyUser:root SSHProxyIdentity:/root/.ssh/id_rsa SSHProxyUsePassword:false SSHProxyTimeout:5 CleanupData:false CleanupLog:false RetainDataRoles:[] RetainDataNodes:[] ShowUptime:false JSON:false Operation:StartOperation}
Stopping component tikv
        Stopping instance 192.168.59.24
        Stop tikv 192.168.59.24:20160 success
Destroying component tikv
Destroying instance 192.168.59.24
Destroy 192.168.59.24 success
- Destroy tikv paths: [/tidb-data/tikv-20160 /tidb-deploy/tikv-20160/log /tidb-deploy/tikv-20160 /etc/systemd/system/tikv-20160.service]
Stopping component node_exporter
        Stopping instance 192.168.59.24
        Stop 192.168.59.24 success
Stopping component blackbox_exporter
        Stopping instance 192.168.59.24
        Stop 192.168.59.24 success
Destroying monitored 192.168.59.24
        Destroying instance 192.168.59.24
Destroy monitored on 192.168.59.24 success
Delete public key 192.168.59.24
Delete public key 192.168.59.24 success
+ [ Serial ] - UpdateMeta: cluster=tidb-test, deleted=`'192.168.59.24:20160'`
+ [ Serial ] - UpdateTopology: cluster=tidb-test
+ Refresh instance configs
  - Regenerate config pd -> 192.168.59.27:2379 ... Done
  - Regenerate config pd -> 192.168.59.28:2379 ... Done
  - Regenerate config pd -> 192.168.59.29:2379 ... Done
  - Regenerate config tikv -> 192.168.59.27:20160 ... Done
  - Regenerate config tikv -> 192.168.59.28:20160 ... Done
  - Regenerate config tikv -> 192.168.59.29:20160 ... Done
  - Regenerate config tidb -> 192.168.59.27:4000 ... Done
  - Regenerate config tidb -> 192.168.59.28:4000 ... Done
  - Regenerate config tidb -> 192.168.59.29:4000 ... Done
  - Regenerate config prometheus -> 192.168.59.26:9090 ... Done
  - Regenerate config grafana -> 192.168.59.26:3000 ... Done
  - Regenerate config alertmanager -> 192.168.59.26:9093 ... Done
+ [ Serial ] - SystemCtl: host=192.168.59.26 action=reload prometheus-9090.service
Destroy success
[root@node1 ~]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v5.2.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.59.29:2379/dashboard
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
192.168.59.26:9093   alertmanager  192.168.59.26  9093/9094    linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.59.26:3000   grafana       192.168.59.26  3000         linux/x86_64  Up      -                             /tidb-deploy/grafana-3000
192.168.59.27:2379   pd            192.168.59.27  2379/2380    linux/x86_64  Up      /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.28:2379   pd            192.168.59.28  2379/2380    linux/x86_64  Up|L    /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.29:2379   pd            192.168.59.29  2379/2380    linux/x86_64  Up|UI   /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.26:9090   prometheus    192.168.59.26  9090         linux/x86_64  Up      /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
192.168.59.27:4000   tidb          192.168.59.27  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.28:4000   tidb          192.168.59.28  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.29:4000   tidb          192.168.59.29  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.27:20160  tikv          192.168.59.27  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.28:20160  tikv          192.168.59.28  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.29:20160  tikv          192.168.59.29  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 12
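
After prune, the removed instance should also be gone from PD's store list. A sketch to double-check and, if necessary, clean up any leftover tombstone store records on the PD side (standard pd-ctl usage, not part of the original log):

tiup ctl:v5.2.1 pd -u http://192.168.59.27:2379 store
tiup ctl:v5.2.1 pd -u http://192.168.59.27:2379 store remove-tombstone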

3. TiDB Cluster Rename

  • Rename cluster tidb-test with the tiup cluster rename command; TiUP reloads Prometheus and Grafana as part of the rename, and the old cluster name is no longer recognized afterwards. A connectivity sanity check is sketched after the output.

[root@node1 ~]# tiup cluster rename tidb-test tidb-prod
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster rename tidb-test tidb-prod
Will rename the cluster name from tidb-test to tidb-prod.
Do you confirm this action? [y/N]:(default=N) y
Rename cluster `tidb-test` -> `tidb-prod` successfully
Will reload the cluster tidb-prod with restart policy is true, nodes: , roles: grafana,prometheus.
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-prod/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-prod/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [ Serial ] - UpdateTopology: cluster=tidb-prod
+ Refresh instance configs
  - Regenerate config pd -> 192.168.59.27:2379 ... Done
  - Regenerate config pd -> 192.168.59.28:2379 ... Done
  - Regenerate config pd -> 192.168.59.29:2379 ... Done
  - Regenerate config tikv -> 192.168.59.27:20160 ... Done
  - Regenerate config tikv -> 192.168.59.28:20160 ... Done
  - Regenerate config tikv -> 192.168.59.29:20160 ... Done
  - Regenerate config tidb -> 192.168.59.27:4000 ... Done
  - Regenerate config tidb -> 192.168.59.28:4000 ... Done
  - Regenerate config tidb -> 192.168.59.29:4000 ... Done
  - Regenerate config prometheus -> 192.168.59.26:9090 ... Done
  - Regenerate config grafana -> 192.168.59.26:3000 ... Done
  - Regenerate config alertmanager -> 192.168.59.26:9093 ... Done
+ Refresh monitor configs
  - Refresh config node_exporter -> 192.168.59.27 ... Done
  - Refresh config node_exporter -> 192.168.59.28 ... Done
  - Refresh config node_exporter -> 192.168.59.29 ... Done
  - Refresh config node_exporter -> 192.168.59.26 ... Done
  - Refresh config blackbox_exporter -> 192.168.59.27 ... Done
  - Refresh config blackbox_exporter -> 192.168.59.28 ... Done
  - Refresh config blackbox_exporter -> 192.168.59.29 ... Done
  - Refresh config blackbox_exporter -> 192.168.59.26 ... Done
+ [ Serial ] - UpgradeCluster
Upgrading component prometheus
        Restarting instance 192.168.59.26:9090
        Restart instance 192.168.59.26:9090 success
Upgrading component grafana
        Restarting instance 192.168.59.26:3000
        Restart instance 192.168.59.26:3000 success
Reloaded cluster `tidb-prod` successfully
[root@node1 ~]# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster list
Name       User  Version  Path                                            PrivateKey
----       ----  -------  ----                                            ----------
tidb-prod  tidb  v5.2.1   /root/.tiup/storage/cluster/clusters/tidb-prod  /root/.tiup/storage/cluster/clusters/tidb-prod/ssh/id_rsa
[root@node1 ~]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-test

Error: Cluster tidb-test not found

Verbose debug logs has been written to /root/.tiup/logs/tiup-cluster-debug-2021-11-04-11-57-10.log.
Error: run `/root/.tiup/components/cluster/v1.6.1/tiup-cluster` (wd:/root/.tiup/data/SnlRqcu) failed: exit status 1
[root@node1 ~]# tiup cluster display tidb-prod
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.1/tiup-cluster display tidb-prod
Cluster type:       tidb
Cluster name:       tidb-prod
Cluster version:    v5.2.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.59.29:2379/dashboard
ID                   Role          Host           Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                   ----          ----           -----        -------       ------  --------                      ----------
192.168.59.26:9093   alertmanager  192.168.59.26  9093/9094    linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
192.168.59.26:3000   grafana       192.168.59.26  3000         linux/x86_64  Up      -                             /tidb-deploy/grafana-3000
192.168.59.27:2379   pd            192.168.59.27  2379/2380    linux/x86_64  Up      /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.28:2379   pd            192.168.59.28  2379/2380    linux/x86_64  Up|L    /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.29:2379   pd            192.168.59.29  2379/2380    linux/x86_64  Up|UI   /tidb-data/pd-2379            /tidb-deploy/pd-2379
192.168.59.26:9090   prometheus    192.168.59.26  9090         linux/x86_64  Up      /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
192.168.59.27:4000   tidb          192.168.59.27  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.28:4000   tidb          192.168.59.28  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.29:4000   tidb          192.168.59.29  4000/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-4000
192.168.59.27:20160  tikv          192.168.59.27  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.28:20160  tikv          192.168.59.28  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
192.168.59.29:20160  tikv          192.168.59.29  20160/20180  linux/x86_64  Up      /tidb-data/tikv-20160         /tidb-deploy/tikv-20160
Total nodes: 12
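
The rename only changes the name under which TiUP tracks the cluster; service endpoints and data are untouched, so existing connections keep working. A quick sanity check, assuming the default root user of a fresh lab deployment (no password set):

mysql -h 192.168.59.27 -P 4000 -u root
# PD and Dashboard endpoints are also unchanged:
curl -s http://192.168.59.29:2379/pd/api/v1/members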

