Passwordless SSH authentication after changing the IP addresses
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.45
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.46
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.47
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.48
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.49
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.50
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.51
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.52
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.53
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.54
ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.86.55
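The eleven ssh-copy-id invocations above can be collapsed into a loop. A dry-run sketch (it only prints the commands; remove the leading `echo` to actually push the key):

```shell
# Dry run: print the ssh-copy-id command for each node in the range.
# Remove the leading 'echo' to actually copy the public key.
for host in $(seq -f "172.16.86.%g" 45 55); do
    echo ssh-copy-id -i ~/.ssh/id_rsa.pub "$host"
done
```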
Edit the file: topology.yaml
4. Update TiUP and the TiUP cluster component to the latest version
tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.11.1-linux-amd64.tar.gz 6.92 MiB / 6.92 MiB 100.00% 10.30 MiB/s
Updated successfully!
download https://tiup-mirrors.pingcap.com/cluster-v1.11.1-linux-amd64.tar.gz 8.44 MiB / 8.44 MiB 100.00% 6.72 MiB/s
Updated successfully!
Verify the current TiUP cluster component version:
[tidb@TiFlash ~]$ tiup --binary cluster
/home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster
Generate the template for the cluster initialization configuration file (already generated, with the IP changes completed):
[root@jiekexu1 ~]# tiup cluster template > topology.yaml
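For reference, a minimal topology.yaml consistent with the node roles seen in the check output below might look like the following. This is an inferred sketch, not the exact file used here, and it omits all resource and parameter tuning:

```yaml
# Inferred sketch of topology.yaml for this cluster; adjust to your environment.
global:
  user: "tidb"
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
pd_servers:
  - host: 172.16.86.48
  - host: 172.16.86.49
  - host: 172.16.86.50
tidb_servers:
  - host: 172.16.86.45
  - host: 172.16.86.46
  - host: 172.16.86.47
tikv_servers:
  - host: 172.16.86.51
  - host: 172.16.86.52
  - host: 172.16.86.53
tiflash_servers:
  - host: 172.16.86.55
monitoring_servers:
  - host: 172.16.86.54
grafana_servers:
  - host: 172.16.86.54
alertmanager_servers:
  - host: 172.16.86.54
```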
7. Check for and automatically fix potential risks in the cluster
[tidb@TiFlash ~]$ tiup cluster check ./topology.yaml --user root -p -i /home/root/.ssh/gcp_rsa
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster check ./topology.yaml --user root -p -i /home/root/.ssh/gcp_rsa
Input SSH password:
...
Node Check Result Message
---- ----- ------ -------
172.16.86.54 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.54 cpu-cores Pass number of CPU cores / threads: 4
172.16.86.54 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.54 memory Pass memory size is 8192MB
172.16.86.54 exist Fail /tidb-deploy/alertmanager-9093/log already exists
172.16.86.54 exist Fail /etc/systemd/system/prometheus-9090.service already exists
172.16.86.54 exist Fail /etc/systemd/system/alertmanager-9093.service already exists
172.16.86.54 exist Fail /tidb-deploy/grafana-3000/log already exists
172.16.86.54 exist Fail /tidb-deploy/prometheus-9090/log already exists
172.16.86.54 exist Fail /tidb-deploy/alertmanager-9093 already exists
172.16.86.54 exist Fail /tidb-data/alertmanager-9093 already exists
172.16.86.54 exist Fail /tidb-data/prometheus-9090 already exists
172.16.86.54 exist Fail /etc/systemd/system/grafana-3000.service already exists
172.16.86.54 exist Fail /tidb-deploy/prometheus-9090 already exists
172.16.86.54 exist Fail /tidb-deploy/grafana-3000 already exists
172.16.86.54 selinux Pass SELinux is disabled
172.16.86.54 thp Pass THP is disabled
172.16.86.54 listening-port Fail port 9090 is already in use
172.16.86.54 listening-port Fail port 3000 is already in use
172.16.86.54 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.54 swap Warn swap is enabled, please disable it for best performance
172.16.86.54 network Pass network speed of ens160 is 10000MB
172.16.86.54 disk Warn mount point / does not have 'noatime' option set
172.16.86.54 command Pass numactl: policy: default
172.16.86.55 network Pass network speed of ens160 is 10000MB
172.16.86.55 selinux Pass SELinux is disabled
172.16.86.55 command Pass numactl: policy: default
172.16.86.55 cpu-cores Pass number of CPU cores / threads: 32
172.16.86.55 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.55 memory Pass memory size is 65536MB
172.16.86.55 swap Warn swap is enabled, please disable it for best performance
172.16.86.55 disk Warn mount point / does not have 'noatime' option set
172.16.86.55 thp Pass THP is disabled
172.16.86.55 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.55 exist Fail /tidb-deploy/tiflash-9000/log already exists
172.16.86.55 exist Fail /etc/systemd/system/tiflash-9000.service already exists
172.16.86.55 exist Fail /tidb-deploy/tiflash-9000 already exists
172.16.86.55 exist Fail /tidb-data/tiflash-9000 already exists
172.16.86.55 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.51 cpu-cores Pass number of CPU cores / threads: 16
172.16.86.51 swap Warn swap is enabled, please disable it for best performance
172.16.86.51 memory Pass memory size is 32768MB
172.16.86.51 network Pass network speed of ens160 is 10000MB
172.16.86.51 disk Warn mount point / does not have 'noatime' option set
172.16.86.51 selinux Pass SELinux is disabled
172.16.86.51 exist Fail /tidb-deploy/tikv-20160 already exists
172.16.86.51 exist Fail /tidb-data/tikv-20160 already exists
172.16.86.51 exist Fail /tidb-deploy/tikv-20160/log already exists
172.16.86.51 exist Fail /etc/systemd/system/tikv-20160.service already exists
172.16.86.51 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.51 thp Pass THP is disabled
172.16.86.51 command Pass numactl: policy: default
172.16.86.51 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.51 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.46 cpu-cores Pass number of CPU cores / threads: 16
172.16.86.46 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.46 swap Warn swap is enabled, please disable it for best performance
172.16.86.46 selinux Pass SELinux is disabled
172.16.86.46 thp Pass THP is disabled
172.16.86.46 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.46 exist Fail /tidb-deploy/tidb-4000 already exists
172.16.86.46 exist Fail /tidb-deploy/tidb-4000/log already exists
172.16.86.46 exist Fail /etc/systemd/system/tidb-4000.service already exists
172.16.86.46 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.46 memory Pass memory size is 32768MB
172.16.86.46 network Pass network speed of ens160 is 10000MB
172.16.86.46 command Pass numactl: policy: default
172.16.86.47 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.47 exist Fail /etc/systemd/system/tidb-4000.service already exists
172.16.86.47 exist Fail /tidb-deploy/tidb-4000 already exists
172.16.86.47 exist Fail /tidb-deploy/tidb-4000/log already exists
172.16.86.47 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.47 swap Warn swap is enabled, please disable it for best performance
172.16.86.47 memory Pass memory size is 32768MB
172.16.86.47 network Pass network speed of ens160 is 10000MB
172.16.86.47 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.47 cpu-cores Pass number of CPU cores / threads: 16
172.16.86.47 selinux Pass SELinux is disabled
172.16.86.47 thp Pass THP is disabled
172.16.86.47 command Pass numactl: policy: default
172.16.86.49 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.49 exist Fail /tidb-data/pd-2379 already exists
172.16.86.49 exist Fail /tidb-deploy/pd-2379/log already exists
172.16.86.49 exist Fail /etc/systemd/system/pd-2379.service already exists
172.16.86.49 exist Fail /tidb-deploy/pd-2379 already exists
172.16.86.49 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.49 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.49 selinux Pass SELinux is disabled
172.16.86.49 command Pass numactl: policy: default
172.16.86.49 listening-port Fail port 2380 is already in use
172.16.86.49 listening-port Fail port 2379 is already in use
172.16.86.49 cpu-cores Pass number of CPU cores / threads: 4
172.16.86.49 swap Warn swap is enabled, please disable it for best performance
172.16.86.49 memory Pass memory size is 8192MB
172.16.86.49 network Pass network speed of ens160 is 10000MB
172.16.86.49 disk Warn mount point / does not have 'noatime' option set
172.16.86.49 thp Pass THP is disabled
172.16.86.50 swap Warn swap is enabled, please disable it for best performance
172.16.86.50 memory Pass memory size is 8192MB
172.16.86.50 disk Warn mount point / does not have 'noatime' option set
172.16.86.50 listening-port Fail port 2379 is already in use
172.16.86.50 listening-port Fail port 2380 is already in use
172.16.86.50 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.50 network Pass network speed of ens160 is 10000MB
172.16.86.50 selinux Pass SELinux is disabled
172.16.86.50 thp Pass THP is disabled
172.16.86.50 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.50 exist Fail /tidb-deploy/pd-2379/log already exists
172.16.86.50 exist Fail /etc/systemd/system/pd-2379.service already exists
172.16.86.50 exist Fail /tidb-deploy/pd-2379 already exists
172.16.86.50 exist Fail /tidb-data/pd-2379 already exists
172.16.86.50 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.50 cpu-cores Pass number of CPU cores / threads: 4
172.16.86.50 command Pass numactl: policy: default
172.16.86.53 cpu-cores Pass number of CPU cores / threads: 16
172.16.86.53 memory Pass memory size is 32768MB
172.16.86.53 disk Warn mount point / does not have 'noatime' option set
172.16.86.53 thp Pass THP is disabled
172.16.86.53 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.53 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.53 swap Warn swap is enabled, please disable it for best performance
172.16.86.53 network Pass network speed of ens160 is 10000MB
172.16.86.53 selinux Pass SELinux is disabled
172.16.86.53 command Pass numactl: policy: default
172.16.86.53 exist Fail /tidb-deploy/tikv-20160 already exists
172.16.86.53 exist Fail /tidb-data/tikv-20160 already exists
172.16.86.53 exist Fail /tidb-deploy/tikv-20160/log already exists
172.16.86.53 exist Fail /etc/systemd/system/tikv-20160.service already exists
172.16.86.53 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.45 network Pass network speed of ens160 is 10000MB
172.16.86.45 selinux Pass SELinux is disabled
172.16.86.45 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.45 exist Fail /etc/systemd/system/tidb-4000.service already exists
172.16.86.45 exist Fail /tidb-deploy/tidb-4000 already exists
172.16.86.45 exist Fail /tidb-deploy/tidb-4000/log already exists
172.16.86.45 swap Warn swap is enabled, please disable it for best performance
172.16.86.45 memory Pass memory size is 32768MB
172.16.86.45 thp Pass THP is disabled
172.16.86.45 command Pass numactl: policy: default
172.16.86.45 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.45 cpu-cores Pass number of CPU cores / threads: 16
172.16.86.45 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.48 cpu-cores Pass number of CPU cores / threads: 4
172.16.86.48 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.48 memory Pass memory size is 8192MB
172.16.86.48 selinux Pass SELinux is disabled
172.16.86.48 thp Pass THP is disabled
172.16.86.48 listening-port Fail port 2380 is already in use
172.16.86.48 listening-port Fail port 2379 is already in use
172.16.86.48 exist Fail /etc/systemd/system/pd-2379.service already exists
172.16.86.48 exist Fail /tidb-deploy/pd-2379 already exists
172.16.86.48 exist Fail /tidb-data/pd-2379 already exists
172.16.86.48 exist Fail /tidb-deploy/pd-2379/log already exists
172.16.86.48 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.48 swap Warn swap is enabled, please disable it for best performance
172.16.86.48 network Pass network speed of ens160 is 10000MB
172.16.86.48 disk Warn mount point / does not have 'noatime' option set
172.16.86.48 command Pass numactl: policy: default
172.16.86.52 disk Warn mount point / does not have 'noatime' option set
172.16.86.52 command Pass numactl: policy: default
172.16.86.52 exist Fail /tidb-deploy/tikv-20160 already exists
172.16.86.52 exist Fail /tidb-data/tikv-20160 already exists
172.16.86.52 exist Fail /tidb-deploy/tikv-20160/log already exists
172.16.86.52 exist Fail /etc/systemd/system/tikv-20160.service already exists
172.16.86.52 cpu-cores Pass number of CPU cores / threads: 16
172.16.86.52 cpu-governor Warn Unable to determine current CPU frequency governor policy
172.16.86.52 swap Warn swap is enabled, please disable it for best performance
172.16.86.52 memory Pass memory size is 32768MB
172.16.86.52 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
172.16.86.52 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
172.16.86.52 network Pass network speed of ens160 is 10000MB
172.16.86.52 selinux Pass SELinux is disabled
172.16.86.52 thp Pass THP is disabled
The file paths already exist and the ports are occupied.
Log in to each node and delete the data files under /tidb-deploy/ and /tidb-data/:
[root@TIDB-node-1 ~]# cd /tidb-deploy/
[root@TIDB-node-1 tidb-deploy]# ll
drwxr-xr-x 6 tidb tidb 55 Dec 30 14:58 monitor-9100
drwxr-xr-x 6 tidb tidb 55 Dec 30 14:57 tidb-4000
[root@TIDB-node-1 tidb-deploy]# rm -rf *
[root@TIDB-node-1 tidb-deploy]#
[root@TIDB-node-1 ~]# cd /tidb-data/
[root@TIDB-node-1 tidb-deploy]# rm -rf *
Re-run the check:
172.16.86.45 exist Fail /etc/systemd/system/tidb-4000.service already exists
172.16.86.46 exist Fail /etc/systemd/system/tidb-4000.service already exists
172.16.86.47 exist Fail /etc/systemd/system/tidb-4000.service already exists
172.16.86.48 exist Fail /etc/systemd/system/pd-2379.service already exists
172.16.86.49 exist Fail /etc/systemd/system/pd-2379.service already exists
172.16.86.50 exist Fail /etc/systemd/system/pd-2379.service already exists
172.16.86.51 exist Fail /etc/systemd/system/tikv-20160.service already exists
172.16.86.52 exist Fail /etc/systemd/system/tikv-20160.service already exists
172.16.86.53 exist Fail /etc/systemd/system/tikv-20160.service already exists
172.16.86.54 exist Fail /etc/systemd/system/alertmanager-9093.service already exists
172.16.86.54 exist Fail /etc/systemd/system/grafana-3000.service already exists
172.16.86.54 exist Fail /etc/systemd/system/prometheus-9090.service already exists
172.16.86.55 exist Fail /etc/systemd/system/tiflash-9000.service already exists
172.16.86.54 listening-port Fail port 9090 is already in use
172.16.86.54 listening-port Fail port 3000 is already in use
Restart the grafana and prometheus services, then delete the files under the data directories /tidb-deploy/ and /tidb-data/.
####################################################################################################
Then, based on the check results from step 7 above, manually delete the /tidb-deploy and /tidb-data directories and the corresponding service files on each node, re-run the check, and continue the deployment.
rm -rf /etc/systemd/system/tikv-20160.service
rm -rf /etc/systemd/system/grafana-3000.service
rm -rf /etc/systemd/system/prometheus-9090.service
rm -rf /etc/systemd/system/alertmanager-9093.service
####################################################################################################
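The per-node cleanup above can be scripted. A hedged dry-run sketch (host list and paths taken from the check output; it only prints the commands — remove the leading `echo` only after double-checking every target on each node):

```shell
# Dry run: print the cleanup command for every node.
# Remove the leading 'echo' only after verifying the paths are safe to delete.
for host in $(seq -f "172.16.86.%g" 45 55); do
    echo ssh root@"$host" \
        "'rm -rf /tidb-deploy/* /tidb-data/* /etc/systemd/system/{tidb-4000,tikv-20160,pd-2379,tiflash-9000,grafana-3000,prometheus-9090,alertmanager-9093}.service'"
done
```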
The check now succeeds.
[tidb@TiFlash data]$ tiup cluster list
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
jiekexu-tidb tidb v6.5.0 /home/tidb/.tiup/storage/cluster/clusters/jiekexu-tidb /home/tidb/.tiup/storage/cluster/clusters/jiekexu-tidb/ssh/id_rsa
rokin-tidb tidb v6.5.0 /home/tidb/.tiup/storage/cluster/clusters/rokin-tidb /home/tidb/.tiup/storage/cluster/clusters/rokin-tidb/ssh/id_rsa
Delete the files under /home/tidb/.tiup/storage/cluster/clusters/ to remove the historical cluster entries from the `tiup cluster list` output.
TiUP command history is recorded in: /home/tidb/.tiup/history/tiup-history-0
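Removing a stale entry boils down to deleting its metadata directory. A dry-run sketch with a hypothetical cluster name (substitute the entry you want to drop; the deletion is irreversible, so verify the name against `tiup cluster list` first):

```shell
# Hypothetical stale cluster name taken from `tiup cluster list`.
STALE_CLUSTER=jiekexu-tidb
# Dry run: print the directory that would be removed.
# Remove the leading 'echo' to actually delete the metadata.
echo rm -rf "/home/tidb/.tiup/storage/cluster/clusters/${STALE_CLUSTER}"
```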
8. Check the latest TiDB versions available
[root@jiekexu1 ~]# tiup list tidb
v6.5.0 2022-12-29T11:32:06+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.6.0-alpha-nightly-20230102 2023-01-02T22:35:14+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
9. Deploy the TiDB cluster
[root@jiekexu1 ~]# tiup cluster deploy rokintidb1 v6.5.0 ./topology.yaml --user root -p
...
Enable 172.16.86.46 success
Enable 172.16.86.53 success
Enable 172.16.86.51 success
Cluster `rokintidb1` deployed successfully, you can start it with command: `tiup cluster start rokintidb1 --init`
10. View the clusters managed by TiUP
tiup cluster list
[tidb@TiFlash ~]$ tiup cluster list
tiup is checking updates for component cluster ...timeout! # the update check timed out
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
rokintidb1 tidb v6.5.0 /home/tidb/.tiup/storage/cluster/clusters/rokintidb1 /home/tidb/.tiup/storage/cluster/clusters/rokintidb1/ssh/id_rsa
[tidb@TiFlash ~]$
[tidb@TiFlash ~]$ tiup update --all # timeout fix: upgrade tiup
component cluster version v1.11.1 is already installed
Updated successfully!
[tidb@TiFlash ~]$ tiup cluster list
tiup is checking updates for component cluster ... # timeout resolved after upgrading tiup
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster list
Name User Version Path PrivateKey
---- ---- ------- ---- ----------
rokintidb1 tidb v6.5.0 /home/tidb/.tiup/storage/cluster/clusters/rokintidb1 /home/tidb/.tiup/storage/cluster/clusters/rokintidb1/ssh/id_rsa
[tidb@TiFlash ~]$
11. Check the status of the rokintidb1 cluster
tiup cluster display rokintidb1
[tidb@TiFlash ~]$ tiup cluster display rokintidb1
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster display rokintidb1
Cluster type: tidb
Cluster name: rokintidb1
Cluster version: v6.5.0
Deploy user: tidb
SSH type: builtin
Grafana URL: http://172.16.86.54:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.86.54:9093 alertmanager 172.16.86.54 9093/9094 linux/x86_64 Down /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
172.16.86.54:3000 grafana 172.16.86.54 3000 linux/x86_64 Down - /tidb-deploy/grafana-3000
172.16.86.48:2379 pd 172.16.86.48 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
172.16.86.49:2379 pd 172.16.86.49 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
172.16.86.50:2379 pd 172.16.86.50 2379/2380 linux/x86_64 Down /tidb-data/pd-2379 /tidb-deploy/pd-2379
172.16.86.54:9090 prometheus 172.16.86.54 9090/12020 linux/x86_64 Down /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
172.16.86.45:4000 tidb 172.16.86.45 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
172.16.86.46:4000 tidb 172.16.86.46 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
172.16.86.47:4000 tidb 172.16.86.47 4000/10080 linux/x86_64 Down - /tidb-deploy/tidb-4000
172.16.86.55:9000 tiflash 172.16.86.55 9000/8123/3930/20170/20292/8234 linux/x86_64 N/A /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
172.16.86.51:20160 tikv 172.16.86.51 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
172.16.86.52:20160 tikv 172.16.86.52 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
172.16.86.53:20160 tikv 172.16.86.53 20160/20180 linux/x86_64 N/A /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
Total nodes: 13
[tidb@TiFlash ~]$
12. Securely start the cluster with --init
tiup cluster start rokintidb1 --init
Secure start is a startup mode introduced in TiUP cluster v1.9.0 that improves database security. With it, TiUP automatically generates a password for the TiDB root user and prints it on the command line. After a secure start, you can no longer log in as a password-less root user, so you must record the printed password for subsequent operations. The generated password is displayed only once; if you fail to record it or forget it, you will need to follow the "forgot root password" procedure to reset it. Alternatively, you can use a plain start (tiup cluster start rokintidb1), which allows logging in to the database as root without a password, but secure start is the recommended approach.
[tidb@TiFlash ~]$ tiup cluster start rokintidb1 --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster start rokintidb1 --init
Starting cluster rokintidb1...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/rokintidb1/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/rokintidb1/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.52
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.49
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.50
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.46
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.47
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.51
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.54
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.54
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.48
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.54
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.45
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.55
+ [Parallel] - UserSSH: user=tidb, host=172.16.86.53
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 172.16.86.50:2379
Starting instance 172.16.86.48:2379
Starting instance 172.16.86.49:2379
Start instance 172.16.86.50:2379 success
Start instance 172.16.86.49:2379 success
Start instance 172.16.86.48:2379 success
Starting component tikv
Starting instance 172.16.86.53:20160
Starting instance 172.16.86.52:20160
Starting instance 172.16.86.51:20160
Start instance 172.16.86.53:20160 success
Start instance 172.16.86.51:20160 success
Start instance 172.16.86.52:20160 success
Starting component tidb
Starting instance 172.16.86.46:4000
Starting instance 172.16.86.45:4000
Starting instance 172.16.86.47:4000
Start instance 172.16.86.47:4000 success
Start instance 172.16.86.46:4000 success
Start instance 172.16.86.45:4000 success
Starting component tiflash
Starting instance 172.16.86.55:9000
Start instance 172.16.86.55:9000 success
Starting component prometheus
Starting instance 172.16.86.54:9090
Start instance 172.16.86.54:9090 success
Starting component grafana
Starting instance 172.16.86.54:3000
Start instance 172.16.86.54:3000 success
Starting component alertmanager
Starting instance 172.16.86.54:9093
Start instance 172.16.86.54:9093 success
Starting component node_exporter
Starting instance 172.16.86.51
Starting instance 172.16.86.46
Starting instance 172.16.86.52
Starting instance 172.16.86.54
Starting instance 172.16.86.47
Starting instance 172.16.86.50
Starting instance 172.16.86.53
Starting instance 172.16.86.48
Starting instance 172.16.86.45
Starting instance 172.16.86.55
Starting instance 172.16.86.49
Start 172.16.86.48 success
Start 172.16.86.49 success
Start 172.16.86.50 success
Start 172.16.86.54 success
Start 172.16.86.45 success
Start 172.16.86.52 success
Start 172.16.86.46 success
Start 172.16.86.47 success
Start 172.16.86.51 success
Start 172.16.86.53 success
Start 172.16.86.55 success
Starting component blackbox_exporter
Starting instance 172.16.86.51
Starting instance 172.16.86.50
Starting instance 172.16.86.52
Starting instance 172.16.86.46
Starting instance 172.16.86.54
Starting instance 172.16.86.48
Starting instance 172.16.86.55
Starting instance 172.16.86.45
Starting instance 172.16.86.49
Starting instance 172.16.86.53
Starting instance 172.16.86.47
Start 172.16.86.49 success
Start 172.16.86.48 success
Start 172.16.86.52 success
Start 172.16.86.54 success
Start 172.16.86.45 success
Start 172.16.86.51 success
Start 172.16.86.53 success
Start 172.16.86.47 success
Start 172.16.86.46 success
Start 172.16.86.50 success
Start 172.16.86.55 success
+ [ Serial ] - UpdateTopology: cluster=rokintidb1
Started cluster `rokintidb1` successfully
The root password of TiDB database has been changed.
The new password is: '63rXnp&0te7%v98E+@'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
As the output above shows, PD, TiKV, TiFlash, TiDB, Prometheus, Grafana, and the other components have started; the cluster is up and initialized, and '63rXnp&0te7%v98E+@' is the generated password of the root user.
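With the generated root password recorded, you can connect through any of the TiDB endpoints. A sketch assuming the first tidb instance from the display output (requires a MySQL-compatible client; it prints the connect command rather than running it):

```shell
# Hypothetical endpoint: the first tidb instance from the display output above.
TIDB_HOST=172.16.86.45
TIDB_PORT=4000
# Prints the connect command; run it interactively and enter the generated password.
echo "mysql -h ${TIDB_HOST} -P ${TIDB_PORT} -u root -p"
```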
Check the cluster status:
tiup cluster display rokintidb1
[tidb@TiFlash ~]$ tiup cluster display rokintidb1
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.1/tiup-cluster display rokintidb1
Cluster type: tidb
Cluster name: rokintidb1
Cluster version: v6.5.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://172.16.86.50:2379/dashboard
Grafana URL: http://172.16.86.54:3000
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
172.16.86.54:9093 alertmanager 172.16.86.54 9093/9094 linux/x86_64 Up /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
172.16.86.54:3000 grafana 172.16.86.54 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
172.16.86.48:2379 pd 172.16.86.48 2379/2380 linux/x86_64 Up /tidb-data/pd-2379 /tidb-deploy/pd-2379
172.16.86.49:2379 pd 172.16.86.49 2379/2380 linux/x86_64 Up|L /tidb-data/pd-2379 /tidb-deploy/pd-2379
172.16.86.50:2379 pd 172.16.86.50 2379/2380 linux/x86_64 Up|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
172.16.86.54:9090 prometheus 172.16.86.54 9090/12020 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
172.16.86.45:4000 tidb 172.16.86.45 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
172.16.86.46:4000 tidb 172.16.86.46 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
172.16.86.47:4000 tidb 172.16.86.47 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
172.16.86.55:9000 tiflash 172.16.86.55 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
172.16.86.51:20160 tikv 172.16.86.51 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
172.16.86.52:20160 tikv 172.16.86.52 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
172.16.86.53:20160 tikv 172.16.86.53 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
Total nodes: 13
[tidb@TiFlash ~]$
Commands to start and stop the cluster:
tiup cluster start rokintidb1
tiup cluster stop rokintidb1
Supplement: reinstalling TiUP
[root@jiekexu1 ~]# tiup uninstall --self
Remove directory '/root/.tiup/bin' successfully!
Remove directory '/root/.tiup/manifest' successfully!
Remove directory '/root/.tiup/manifests' successfully!
Remove directory '/root/.tiup/components' successfully!
Remove directory '/root/.tiup/storage/cluster/packages' successfully!
Uninstalled TiUP successfully! (User data reserved, you can delete '/root/.tiup' manually if you confirm userdata useless)
[root@jiekexu1 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
tiup cluster
tiup update --self && tiup update cluster




