1. Update TiUP to the latest version
TiUP v1.9.4 was installed previously, so the first step is to update TiUP to v1.15.1.
[tidb@tidbser1 ~]$ tiup update --self
download https://tiup-mirrors.pingcap.com/tiup-v1.15.1-linux-amd64.tar.gz 4.98 MiB / 4.98 MiB 100.00% 2.38 MiB/s
Updated successfully!
[tidb@tidbser1 ~]$ tiup update cluster
download https://tiup-mirrors.pingcap.com/cluster-v1.15.1-linux-amd64.tar.gz 8.76 MiB / 8.76 MiB 100.00% 3.94 MiB/s
Updated successfully!
Alternatively, update all components at once with the following command:
[tidb@tidbser1 packages]$ tiup update --all
component cluster version v1.15.1 is already installed
download https://tiup-mirrors.pingcap.com/dm-v1.15.1-linux-amd64.tar.gz 8.42 MiB / 8.42 MiB 100.00% 2.81 MiB/s
download https://tiup-mirrors.pingcap.com/dmctl-v8.1.0-linux-amd64.tar.gz 68.75 MiB / 68.75 MiB 100.00% 2.70 MiB/s
download https://tiup-mirrors.pingcap.com/pd-recover-v8.1.0-linux-amd64.tar.gz 14.64 MiB / 14.64 MiB 100.00% 2.64 MiB/s
Updated successfully!
The downloaded files are stored under /home/tidb/.tiup/components/:
[tidb@tidbser1 packages]$ ll /home/tidb/.tiup/components/
total 16
drwxr-xr-x 5 tidb tidb 4096 May 24 21:27 cluster
drwxr-xr-x 4 tidb tidb 4096 May 24 21:30 dm
drwxr-xr-x 4 tidb tidb 4096 May 24 21:31 dmctl
drwxr-xr-x 4 tidb tidb 4096 May 24 21:31 pd-recover
[tidb@tidbser1 packages]$
[tidb@tidbser1 packages]$ ll /home/tidb/.tiup/components/cluster/
total 12
drwxr-xr-x 2 tidb tidb 4096 May 24 21:27 v1.15.1
drwxr-xr-x 2 tidb tidb 4096 Jul 3 2021 v1.5.2
drwxr-xr-x 2 tidb tidb 4096 Apr 26 2022 v1.9.4
[tidb@tidbser1 packages]$ ll /home/tidb/.tiup/components/dm
total 8
drwxr-xr-x 2 tidb tidb 4096 Jun 15 2022 v1.10.1
drwxr-xr-x 2 tidb tidb 4096 May 24 21:30 v1.15.1
[tidb@tidbser1 packages]$
[tidb@tidbser1 packages]$ ll /home/tidb/.tiup/components/dmctl/
total 8
drwxr-xr-x 3 tidb tidb 4096 Jun 15 2022 v6.1.0
drwxr-xr-x 3 tidb tidb 4096 May 24 21:31 v8.1.0
[tidb@tidbser1 packages]$
[tidb@tidbser1 packages]$ ll /home/tidb/.tiup/components/pd-recover/
total 8
drwxr-xr-x 2 tidb tidb 4096 Apr 28 2022 v6.0.0
drwxr-xr-x 2 tidb tidb 4096 May 24 21:31 v8.1.0
[tidb@tidbser1 ~]$ tiup -v
1.15.0 tiup
Go Version: go1.21.9
Git Ref: v1.15.1
GitHash: 7f0e0fa8b7d5521075a9be2015eff442769fad47
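To double-check which component versions are installed locally, TiUP also provides a listing command (shown here as a usage hint, not output captured from this environment):
[tidb@tidbser1 ~]$ tiup list --installed --verbose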
2. Deploy a TiDB v7.5.0 cluster
2.1. Generate the cluster topology template file
[tidb@tidbser1 ~]$ tiup cluster template>topo.yaml
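If you prefer a template with every option spelled out, tiup cluster template also accepts a --full flag; either variant works as a starting point:
[tidb@tidbser1 ~]$ tiup cluster template --full > topo.yaml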
2.2. Edit the template file
[tidb@tidbser1 ~]$ cat /home/tidb/topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  # # The user who runs the tidb cluster.
  user: "tidb"
  # # group is used to specify the group name the user belong to if it's not the same as user.
  # group: "tidb"
  # # SSH port of servers in the managed cluster.
  ssh_port: 22
  # # Storage directory for cluster deployment files, startup scripts, and configuration files.
  deploy_dir: "/u01/tidb-deploy"
  # # TiDB Cluster data storage directory
  data_dir: "/u01/tidb-data"
  # # Supported values: "amd64", "arm64" (default: "amd64")
  arch: "amd64"
  # # Resource Control is used to limit the resource of an instance.
  # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
  # # Supports using instance-level `resource_control` to override global `resource_control`.
  # resource_control:
  # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemoryLimit=bytes
  # memory_limit: "2G"
  # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUQuota=
  # # The percentage specifies how much CPU time the unit shall get at maximum, relative to the total CPU time available on one CPU. Use values > 100% for allotting CPU time on more than one CPU.
  # # Example: CPUQuota=200% ensures that the executed processes will never get more than two CPU time.
  # cpu_quota: "200%"
  # # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#IOReadBandwidthMax=device%20bytes
  # io_read_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
  # io_write_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
# # Monitored variables are applied to all the machines.
monitored:
  # # The communication port for reporting system information of each node in the TiDB cluster.
  node_exporter_port: 9100
  # # Blackbox_exporter communication port, used for TiDB cluster port monitoring.
  blackbox_exporter_port: 9115
  # # Storage directory for deployment files, startup scripts, and configuration files of monitoring components.
  deploy_dir: "/u01/tidb-deploy/monitored-9100"
  # # Data storage directory of monitoring components.
  data_dir: "/u01/tidb-data/monitored-9100"
  # # Log storage directory of the monitoring component.
  log_dir: "/u01/tidb-deploy/monitored-9100/log"
# # Server configs are used to specify the runtime configuration of TiDB components.
# # All configuration items can be found in TiDB docs:
# # - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# # - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# # - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# # - TiFlash: https://docs.pingcap.com/tidb/stable/tiflash-configuration
# #
# # All configuration items use points to represent the hierarchy, e.g:
# # readpool.storage.use-unified-pool
# # ^ ^
# # - example: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml.
# # You can overwrite this configuration via the instance-level `config` field.
server_configs:
  # tidb:
  pd:
    replication.location-labels: ["zone", "dc", "host"]
  # tiflash:
  # tiflash-learner:
# # Server configs are used to specify the configuration of PD Servers.
pd_servers:
  # # The ip address of the PD Server.
  - host: 192.168.40.62
    # # SSH port of the server.
    # # ssh_port: 22
    # # PD Server name
    name: "pd-1"
    # # communication port for TiDB Servers to connect.
    client_port: 2379
    # # Communication port among PD Server nodes.
    peer_port: 2380
    # # PD Server deployment file, startup script, configuration file storage directory.
    deploy_dir: "/u01/tidb-deploy/pd-2379"
    # # PD Server data storage directory.
    data_dir: "/u01/tidb-data/pd-2379"
    # # PD Server log file storage directory.
    log_dir: "/u01/tidb-deploy/pd-2379/log"
    # # numa node bindings.
    # numa_node: "0,1"
    # # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 192.168.40.62
    # # ssh_port: 22
    name: "pd-2"
    client_port: 2381
    peer_port: 2382
    deploy_dir: "/u01/tidb-deploy/pd-2381"
    data_dir: "/u01/tidb-data/pd-2381"
    log_dir: "/u01/tidb-deploy/pd-2381/log"
    # numa_node: "0,1"
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 192.168.40.62
    # # ssh_port: 22
    name: "pd-3"
    client_port: 2383
    peer_port: 2384
    deploy_dir: "/u01/tidb-deploy/pd-2383"
    data_dir: "/u01/tidb-data/pd-2383"
    log_dir: "/u01/tidb-deploy/pd-2383/log"
    # numa_node: "0,1"
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
# # Server configs are used to specify the configuration of TiDB Servers.
tidb_servers:
  # # The ip address of the TiDB Server.
  - host: 192.168.40.62
    # # SSH port of the server.
    # # ssh_port: 22
    # # The port for clients to access the TiDB cluster.
    port: 4000
    # # TiDB Server status API port.
    status_port: 10080
    # # TiDB Server deployment file, startup script, configuration file storage directory.
    deploy_dir: "/u01/tidb-deploy/tidb-4000"
    # # TiDB Server log file storage directory.
    log_dir: "/u01/tidb-deploy/tidb-4000/log"
  # # The ip address of the TiDB Server.
  - host: 192.168.40.62
    # # ssh_port: 22
    port: 4001
    status_port: 10081
    deploy_dir: "/u01/tidb-deploy/tidb-4001"
    log_dir: "/u01/tidb-deploy/tidb-4001/log"
  - host: 192.168.40.62
    # # ssh_port: 22
    port: 4002
    status_port: 10082
    deploy_dir: "/u01/tidb-deploy/tidb-4002"
    # log_dir: "/u01/tidb-deploy/tidb-4002/log"
# # Server configs are used to specify the configuration of TiKV Servers.
tikv_servers:
  # # The ip address of the TiKV Server.
  - host: 192.168.40.62
    # # SSH port of the server.
    # # ssh_port: 22
    # # TiKV Server communication port.
    port: 20160
    # # TiKV Server status API port.
    status_port: 20180
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
    # # TiKV Server deployment file, startup script, configuration file storage directory.
    deploy_dir: "/u01/tidb-deploy/tikv-20160"
    # # TiKV Server data storage directory.
    data_dir: "/u01/tidb-data/tikv-20160"
    # # TiKV Server log file storage directory.
    log_dir: "/u01/tidb-deploy/tikv-20160/log"
    # # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   log.level: warn
  # # The ip address of the TiKV Server.
  - host: 192.168.40.62
    # # ssh_port: 22
    port: 20161
    status_port: 20181
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host2" }
    deploy_dir: "/u01/tidb-deploy/tikv-20161"
    data_dir: "/u01/tidb-data/tikv-20161"
    log_dir: "/u01/tidb-deploy/tikv-20161/log"
    # config:
    #   log.level: warn
  - host: 192.168.40.62
    # # ssh_port: 22
    port: 20162
    status_port: 20182
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host3" }
    deploy_dir: "/u01/tidb-deploy/tikv-20162"
    data_dir: "/u01/tidb-data/tikv-20162"
    log_dir: "/u01/tidb-deploy/tikv-20162/log"
    # config:
    #   log.level: warn
# # Server configs are used to specify the configuration of TiFlash Servers.
tiflash_servers:
  # # The ip address of the TiFlash Server.
  - host: 192.168.40.62
    # # SSH port of the server.
    # # ssh_port: 22
    # # TiFlash TCP Service port.
    tcp_port: 9000
    # # TiFlash HTTP Service port.
    http_port: 8123
    # # TiFlash raft service and coprocessor service listening address.
    flash_service_port: 3930
    # # TiFlash Proxy service port.
    flash_proxy_port: 20170
    # # TiFlash Proxy metrics port.
    flash_proxy_status_port: 20292
    # # TiFlash metrics port.
    metrics_port: 8234
    # # TiFlash Server deployment file, startup script, configuration file storage directory.
    deploy_dir: /u01/tidb-deploy/tiflash-9000
    ## With cluster version >= v4.0.9 and you want to deploy a multi-disk TiFlash node, it is recommended to
    ## check config.storage.* for details. The data_dir will be ignored if you defined those configurations.
    ## Setting data_dir to a ','-joined string is still supported but deprecated.
    ## Check https://docs.pingcap.com/tidb/stable/tiflash-configuration#multi-disk-deployment for more details.
    # # TiFlash Server data storage directory.
    data_dir: /u01/tidb-data/tiflash-9000
    # # TiFlash Server log file storage directory.
    log_dir: /u01/tidb-deploy/tiflash-9000/log
  # # The ip address of the TiFlash Server.
  - host: 192.168.40.62
    # # ssh_port: 22
    tcp_port: 9001
    http_port: 8124
    flash_service_port: 3931
    flash_proxy_port: 20171
    flash_proxy_status_port: 20293
    metrics_port: 8235
    deploy_dir: /u01/tidb-deploy/tiflash-9001
    data_dir: /u01/tidb-data/tiflash-9001
    log_dir: /u01/tidb-deploy/tiflash-9001/log
# # Server configs are used to specify the configuration of Prometheus Server.
monitoring_servers:
  # # The ip address of the Monitoring Server.
  - host: 192.168.40.62
    # # SSH port of the server.
    # # ssh_port: 22
    # # Prometheus Service communication port.
    port: 9090
    # # ng-monitoring service communication port
    ng_port: 12020
    # # Prometheus deployment file, startup script, configuration file storage directory.
    deploy_dir: "/u01/tidb-deploy/prometheus-8249"
    # # Prometheus data storage directory.
    data_dir: "/u01/tidb-data/prometheus-8249"
    # # Prometheus log file storage directory.
    log_dir: "/u01/tidb-deploy/prometheus-8249/log"
# # Server configs are used to specify the configuration of Grafana Servers.
grafana_servers:
  # # The ip address of the Grafana Server.
  - host: 192.168.40.62
    # # Grafana web port (browser access)
    port: 3000
    # # Grafana deployment file, startup script, configuration file storage directory.
    deploy_dir: /u01/tidb-deploy/grafana-3000
# # Server configs are used to specify the configuration of Alertmanager Servers.
alertmanager_servers:
  # # The ip address of the Alertmanager Server.
  - host: 192.168.40.62
    # # SSH port of the server.
    # # ssh_port: 22
    # # Alertmanager web service port.
    web_port: 9093
    # # Alertmanager communication port.
    cluster_port: 9094
    # # Alertmanager deployment file, startup script, configuration file storage directory.
    deploy_dir: "/u01/tidb-deploy/alertmanager-9093"
    # # Alertmanager data storage directory.
    data_dir: "/u01/tidb-data/alertmanager-9093"
    # # Alertmanager log file storage directory.
    log_dir: "/u01/tidb-deploy/alertmanager-9093/log"
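Before deploying, it is worth running the built-in environment check against the topology file: tiup cluster check reports OS, kernel-parameter, and disk issues, and --apply attempts to fix the auto-fixable ones. These commands were not part of the original run and are listed here as a recommended extra step:
[tidb@tidbser1 ~]$ tiup cluster check ./topo.yaml --user tidb
[tidb@tidbser1 ~]$ tiup cluster check ./topo.yaml --apply --user tidb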
2.3. Deploy the cluster
[tidb@tidbser1 ~]$ tiup cluster deploy test-cluster v7.5.0 ./topo.yaml
+ Detect CPU Arch
- Detecting node 192.168.40.62 ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: test-cluster
Cluster version: v7.5.0
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.40.62 2379/2380 linux/x86_64 /u01/tidb-deploy/pd-2379,/u01/tidb-data/pd-2379
pd 192.168.40.62 2381/2382 linux/x86_64 /u01/tidb-deploy/pd-2381,/u01/tidb-data/pd-2381
pd 192.168.40.62 2383/2384 linux/x86_64 /u01/tidb-deploy/pd-2383,/u01/tidb-data/pd-2383
tikv 192.168.40.62 20160/20180 linux/x86_64 /u01/tidb-deploy/tikv-20160,/u01/tidb-data/tikv-20160
tikv 192.168.40.62 20161/20181 linux/x86_64 /u01/tidb-deploy/tikv-20161,/u01/tidb-data/tikv-20161
tikv 192.168.40.62 20162/20182 linux/x86_64 /u01/tidb-deploy/tikv-20162,/u01/tidb-data/tikv-20162
tidb 192.168.40.62 4000/10080 linux/x86_64 /u01/tidb-deploy/tidb-4000
tidb 192.168.40.62 4001/10081 linux/x86_64 /u01/tidb-deploy/tidb-4001
tidb 192.168.40.62 4002/10082 linux/x86_64 /u01/tidb-deploy/tidb-4002
tiflash 192.168.40.62 9000/8123/3930/20170/20292/8234 linux/x86_64 /u01/tidb-deploy/tiflash-9000,/u01/tidb-data/tiflash-9000
tiflash 192.168.40.62 9001/8124/3931/20171/20293/8235 linux/x86_64 /u01/tidb-deploy/tiflash-9001,/u01/tidb-data/tiflash-9001
prometheus 192.168.40.62 9090/12020 linux/x86_64 /u01/tidb-deploy/prometheus-8249,/u01/tidb-data/prometheus-8249
grafana 192.168.40.62 3000 linux/x86_64 /u01/tidb-deploy/grafana-3000
alertmanager 192.168.40.62 9093/9094 linux/x86_64 /u01/tidb-deploy/alertmanager-9093,/u01/tidb-data/alertmanager-9093
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v7.5.0 (linux/amd64) ... Done
- Download tikv:v7.5.0 (linux/amd64) ... Done
- Download tidb:v7.5.0 (linux/amd64) ... Done
- Download tiflash:v7.5.0 (linux/amd64) ... Done
- Download prometheus:v7.5.0 (linux/amd64) ... Done
- Download grafana:v7.5.0 (linux/amd64) ... Done
- Download alertmanager: (linux/amd64) ... Done
- Download node_exporter: (linux/amd64) ... Done
- Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 192.168.40.62:22 ... Done
+ Deploy TiDB instance
- Copy pd -> 192.168.40.62 ... Done
- Copy pd -> 192.168.40.62 ... Done
- Copy pd -> 192.168.40.62 ... Done
- Copy tikv -> 192.168.40.62 ... Done
- Copy tikv -> 192.168.40.62 ... Done
- Copy tikv -> 192.168.40.62 ... Done
- Copy tidb -> 192.168.40.62 ... Done
- Copy tidb -> 192.168.40.62 ... Done
- Copy tidb -> 192.168.40.62 ... Done
- Copy tiflash -> 192.168.40.62 ... Done
- Copy tiflash -> 192.168.40.62 ... Done
- Copy prometheus -> 192.168.40.62 ... Done
- Copy grafana -> 192.168.40.62 ... Done
- Copy alertmanager -> 192.168.40.62 ... Done
- Deploy node_exporter -> 192.168.40.62 ... Done
- Deploy blackbox_exporter -> 192.168.40.62 ... Done
+ Copy certificate to remote host
+ Init instance configs
- Generate config pd -> 192.168.40.62:2379 ... Done
- Generate config pd -> 192.168.40.62:2381 ... Done
- Generate config pd -> 192.168.40.62:2383 ... Done
- Generate config tikv -> 192.168.40.62:20160 ... Done
- Generate config tikv -> 192.168.40.62:20161 ... Done
- Generate config tikv -> 192.168.40.62:20162 ... Done
- Generate config tidb -> 192.168.40.62:4000 ... Done
- Generate config tidb -> 192.168.40.62:4001 ... Done
- Generate config tidb -> 192.168.40.62:4002 ... Done
- Generate config tiflash -> 192.168.40.62:9000 ... Done
- Generate config tiflash -> 192.168.40.62:9001 ... Done
- Generate config prometheus -> 192.168.40.62:9090 ... Done
- Generate config grafana -> 192.168.40.62:3000 ... Done
- Generate config alertmanager -> 192.168.40.62:9093 ... Done
+ Init monitor configs
- Generate config node_exporter -> 192.168.40.62 ... Done
- Generate config blackbox_exporter -> 192.168.40.62 ... Done
+ Check status
Enabling component pd
Enabling instance 192.168.40.62:2379
Enabling instance 192.168.40.62:2383
Enabling instance 192.168.40.62:2381
Enable instance 192.168.40.62:2379 success
Enable instance 192.168.40.62:2383 success
Enable instance 192.168.40.62:2381 success
Enabling component tikv
Enabling instance 192.168.40.62:20162
Enabling instance 192.168.40.62:20160
Enabling instance 192.168.40.62:20161
Enable instance 192.168.40.62:20162 success
Enable instance 192.168.40.62:20160 success
Enable instance 192.168.40.62:20161 success
Enabling component tidb
Enabling instance 192.168.40.62:4002
Enabling instance 192.168.40.62:4000
Enabling instance 192.168.40.62:4001
Enable instance 192.168.40.62:4002 success
Enable instance 192.168.40.62:4001 success
Enable instance 192.168.40.62:4000 success
Enabling component tiflash
Enabling instance 192.168.40.62:9001
Enabling instance 192.168.40.62:9000
Enable instance 192.168.40.62:9001 success
Enable instance 192.168.40.62:9000 success
Enabling component prometheus
Enabling instance 192.168.40.62:9090
Enable instance 192.168.40.62:9090 success
Enabling component grafana
Enabling instance 192.168.40.62:3000
Enable instance 192.168.40.62:3000 success
Enabling component alertmanager
Enabling instance 192.168.40.62:9093
Enable instance 192.168.40.62:9093 success
Enabling component node_exporter
Enabling instance 192.168.40.62
Enable 192.168.40.62 success
Enabling component blackbox_exporter
Enabling instance 192.168.40.62
Enable 192.168.40.62 success
Cluster `test-cluster` deployed successfully, you can start it with command: `tiup cluster start test-cluster --init`
2.4. Start the cluster
[tidb@tidbser1 ~]$ tiup cluster start test-cluster --init
Starting cluster test-cluster...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/test-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [Parallel] - UserSSH: user=tidb, host=192.168.40.62
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.40.62:2381
Starting instance 192.168.40.62:2379
Starting instance 192.168.40.62:2383
Start instance 192.168.40.62:2381 success
Start instance 192.168.40.62:2379 success
Start instance 192.168.40.62:2383 success
Starting component tikv
Starting instance 192.168.40.62:20162
Starting instance 192.168.40.62:20161
Starting instance 192.168.40.62:20160
Start instance 192.168.40.62:20160 success
Start instance 192.168.40.62:20162 success
Start instance 192.168.40.62:20161 success
Starting component tidb
Starting instance 192.168.40.62:4002
Starting instance 192.168.40.62:4000
Starting instance 192.168.40.62:4001
Start instance 192.168.40.62:4002 success
Start instance 192.168.40.62:4000 success
Start instance 192.168.40.62:4001 success
Starting component tiflash
Starting instance 192.168.40.62:9001
Starting instance 192.168.40.62:9000
Start instance 192.168.40.62:9001 success
Start instance 192.168.40.62:9000 success
Starting component prometheus
Starting instance 192.168.40.62:9090
Start instance 192.168.40.62:9090 success
Starting component grafana
Starting instance 192.168.40.62:3000
Start instance 192.168.40.62:3000 success
Starting component alertmanager
Starting instance 192.168.40.62:9093
Start instance 192.168.40.62:9093 success
Starting component node_exporter
Starting instance 192.168.40.62
Start 192.168.40.62 success
Starting component blackbox_exporter
Starting instance 192.168.40.62
Start 192.168.40.62 success
+ [ Serial ] - UpdateTopology: cluster=test-cluster
Started cluster `test-cluster` successfully
The root password of TiDB database has been changed.
The new password is: 'W2=1$0ZtY+X7V6#Sp5'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
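Because the generated root password is only displayed once, it is a good idea to log in right away and set a password of your own (the new password below is just a placeholder):
[tidb@tidbser1 ~]$ mysql -h 192.168.40.62 -P 4000 -u root -p
mysql> ALTER USER 'root'@'%' IDENTIFIED BY 'YourNewPassword';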
2.5. Check the cluster status
[tidb@tidbser1 ~]$ tiup cluster display test-cluster
Checking updates for component cluster... Timedout (after 2s)
Cluster type: tidb
Cluster name: test-cluster
Cluster version: v7.5.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.40.62:2381/dashboard
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.40.62:9093 alertmanager 192.168.40.62 9093/9094 linux/x86_64 Up /u01/tidb-data/alertmanager-9093 /u01/tidb-deploy/alertmanager-9093
192.168.40.62:3000 grafana 192.168.40.62 3000 linux/x86_64 Up - /u01/tidb-deploy/grafana-3000
192.168.40.62:2379 pd 192.168.40.62 2379/2380 linux/x86_64 Up /u01/tidb-data/pd-2379 /u01/tidb-deploy/pd-2379
192.168.40.62:2381 pd 192.168.40.62 2381/2382 linux/x86_64 Up|L|UI /u01/tidb-data/pd-2381 /u01/tidb-deploy/pd-2381
192.168.40.62:2383 pd 192.168.40.62 2383/2384 linux/x86_64 Up /u01/tidb-data/pd-2383 /u01/tidb-deploy/pd-2383
192.168.40.62:9090 prometheus 192.168.40.62 9090/12020 linux/x86_64 Up /u01/tidb-data/prometheus-8249 /u01/tidb-deploy/prometheus-8249
192.168.40.62:4000 tidb 192.168.40.62 4000/10080 linux/x86_64 Up - /u01/tidb-deploy/tidb-4000
192.168.40.62:4001 tidb 192.168.40.62 4001/10081 linux/x86_64 Up - /u01/tidb-deploy/tidb-4001
192.168.40.62:4002 tidb 192.168.40.62 4002/10082 linux/x86_64 Up - /u01/tidb-deploy/tidb-4002
192.168.40.62:9000 tiflash 192.168.40.62 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /u01/tidb-data/tiflash-9000 /u01/tidb-deploy/tiflash-9000
192.168.40.62:9001 tiflash 192.168.40.62 9001/8124/3931/20171/20293/8235 linux/x86_64 Up /u01/tidb-data/tiflash-9001 /u01/tidb-deploy/tiflash-9001
192.168.40.62:20160 tikv 192.168.40.62 20160/20180 linux/x86_64 Up /u01/tidb-data/tikv-20160 /u01/tidb-deploy/tikv-20160
192.168.40.62:20161 tikv 192.168.40.62 20161/20181 linux/x86_64 Up /u01/tidb-data/tikv-20161 /u01/tidb-deploy/tikv-20161
192.168.40.62:20162 tikv 192.168.40.62 20162/20182 linux/x86_64 Up /u01/tidb-data/tikv-20162 /u01/tidb-deploy/tikv-20162
Total nodes: 14
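As an extra sanity check, you can connect to any of the TiDB servers and confirm the version the cluster reports (example session, not captured from the original run):
[tidb@tidbser1 ~]$ mysql -h 192.168.40.62 -P 4000 -u root -p -e "SELECT tidb_version()\G"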
3. Errors encountered and their fixes
3.1. TiKV instances deployed on the same host require location labels
The error:
[tidb@tidbser1 ~]$ tiup cluster deploy test-cluster v7.5.0 ./topo.yaml
Error: check TiKV label failed, please fix that before continue:
192.168.40.62:20160:
multiple TiKV instances are deployed at the same host but location label missing
192.168.40.62:20161:
multiple TiKV instances are deployed at the same host but location label missing
192.168.40.62:20162:
multiple TiKV instances are deployed at the same host but location label missing
Solution:
Add a config block with server.labels to topo.yaml for each TiKV instance (see the placement sketch after these snippets):
config:
  server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
config:
  server.labels: { zone: "zone1", dc: "dc1", host: "host2" }
config:
  server.labels: { zone: "zone1", dc: "dc1", host: "host3" }
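Each config block goes under its own entry in tikv_servers, so the three instances carry distinct host labels even though they share one IP. A minimal placement sketch using the same addresses and ports as this topology:
tikv_servers:
  - host: 192.168.40.62
    port: 20160
    status_port: 20180
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host1" }
  - host: 192.168.40.62
    port: 20161
    status_port: 20181
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host2" }
  - host: 192.168.40.62
    port: 20162
    status_port: 20182
    config:
      server.labels: { zone: "zone1", dc: "dc1", host: "host3" }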
3.2. cannot start any token
The error:
[tidb@tidbser1 ~]$ tiup cluster deploy test-cluster v7.5.0 ./topo.yaml
Error: Failed to parse topology file ./topo.yaml (topology.parse_failed)
caused by: yaml: line 151: found character that cannot start any token
Please check the syntax of your topology file ./topo.yaml and try again.
Solution:
topo.yaml contained lines indented with tab characters; replacing the tabs with four spaces fixed the problem.
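A quick way to locate any remaining tab characters is a grep over the file (this assumes GNU grep with -P support):
[tidb@tidbser1 ~]$ grep -nP '\t' topo.yaml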
3.3. replication.location-labels not configured in PD
The error:
[tidb@tidbser1 ~]$ tiup cluster deploy test-cluster v7.5.0 ./topo.yaml
Error: check TiKV label failed, please fix that before continue:
192.168.40.62:20160:
label name 'dc' is not specified in pd config (replication.location-labels: [])
label name 'host' is not specified in pd config (replication.location-labels: [])
label name 'zone' is not specified in pd config (replication.location-labels: [])
192.168.40.62:20161:
label name 'dc' is not specified in pd config (replication.location-labels: [])
label name 'host' is not specified in pd config (replication.location-labels: [])
label name 'zone' is not specified in pd config (replication.location-labels: [])
192.168.40.62:20162:
label name 'dc' is not specified in pd config (replication.location-labels: [])
label name 'host' is not specified in pd config (replication.location-labels: [])
label name 'zone' is not specified in pd config (replication.location-labels: [])
Solution:
Add the following to the server_configs section of topo.yaml:
server_configs:
  # tidb:
  pd:
    replication.location-labels: ["zone", "dc", "host"]
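Once the cluster is running, the effective replication settings can be verified through pd-ctl (example command, not part of the original run):
[tidb@tidbser1 ~]$ tiup ctl:v7.5.0 pd -u http://192.168.40.62:2379 config show replication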
3.4. SSH_AUTH_SOCK specified (tui.id_read_failed)
The error:
[tidb@tidbser1 ~]$ tiup cluster deploy test-cluster v7.5.0 ./topo.yaml
Error: failed to fetch cpu arch: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@192.168.40.62:22' {ssh_stderr: , ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /usr/bin/sudo -H bash -c "uname -m"}, cause: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Solution:
Regenerate the SSH key pair, then update the authorized_keys file with the contents of the newly generated id_rsa.pub:
[tidb@tidbser1 ~]$ ssh-keygen
[tidb@tidbser1 ~]$ cd .ssh
[tidb@tidbser1 .ssh]$ vi authorized_keys
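An alternative to editing authorized_keys by hand is ssh-copy-id, which appends the new public key for you; afterwards, confirm that key-based SSH to the host works:
[tidb@tidbser1 ~]$ ssh-copy-id tidb@192.168.40.62
[tidb@tidbser1 ~]$ ssh tidb@192.168.40.62 'uname -m'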