I. Deployment Plan
| IP | OS | Services |
|---|---|---|
| 192.168.59.26 | CentOS 7.6.1810 | Prometheus, Grafana, Alertmanager |
| 192.168.59.27 | CentOS 7.6.1810 | PD, TiKV, TiDB |
| 192.168.59.28 | CentOS 7.6.1810 | PD, TiKV, TiDB |
| 192.168.59.29 | CentOS 7.6.1810 | PD, TiKV, TiDB |
II. Modify Operating System Parameters
The following shell script (tidbPrepare.sh) adjusts the operating system parameters:
#!/bin/bash
## 1. Disable the firewall
systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl status firewalld.service
## 2. Disable SELinux
sed -i '/^SELINUX=/d' /etc/selinux/config
echo "SELINUX=disabled" >> /etc/selinux/config
cat /etc/selinux/config|grep "SELINUX=disabled"
## 3. Disable swap
echo "vm.swappiness = 0">> /etc/sysctl.conf
swapoff -a && swapon -a
sysctl -p
sed -i '/swap/s/^/#/' /etc/fstab
swapoff -a
free -m
## 4. Configure SSH: remove the banner and allow root login
sed -i '/Banner/s/^/#/' /etc/ssh/sshd_config
sed -i '/PermitRootLogin/s/^/#/' /etc/ssh/sshd_config
echo -e "\n" >> /etc/ssh/sshd_config
echo "Banner none " >> /etc/ssh/sshd_config
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
cat /etc/ssh/sshd_config |grep -v ^#|grep -E 'PermitRoot|Banner'
## 5. Configure kernel parameters (sysctl.conf)
cat >> /etc/sysctl.conf << EOF
net.ipv4.tcp_retries1 = 5
net.ipv4.tcp_syn_retries = 5
net.sctp.path_max_retrans = 10
net.sctp.max_init_retransmits = 10
EOF
echo "fs.file-max = 1000000">> /etc/sysctl.conf
echo "net.core.somaxconn = 32768">> /etc/sysctl.conf
echo "net.ipv4.tcp_tw_recycle = 0">> /etc/sysctl.conf
echo "net.ipv4.tcp_syncookies = 0">> /etc/sysctl.conf
echo "vm.overcommit_memory = 1">> /etc/sysctl.conf
sysctl -p
## 6. Configure resource limits
echo "* soft stack 3072" >> /etc/security/limits.conf
echo "* hard stack 3072" >> /etc/security/limits.conf
echo "* soft nofile 1000000" >> /etc/security/limits.conf
echo "* hard nofile 1000000" >> /etc/security/limits.conf
echo "* soft nproc unlimited" >> /etc/security/limits.d/90-nproc.conf
cat << EOF >>/etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
tail -n 4 /etc/security/limits.conf
tail -n 1 /etc/security/limits.d/90-nproc.conf
## 7. Disable transparent huge pages [CentOS only]
cat >>/etc/rc.d/rc.local<<EOF
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
EOF
chmod +x /etc/rc.d/rc.local
/usr/bin/sh /etc/rc.d/rc.local
cat /sys/kernel/mm/transparent_hugepage/enabled
cat /sys/kernel/mm/transparent_hugepage/defrag
## 8. Install and enable the NTP, irqbalance, and numactl services
yum install ntp ntpdate numactl irqbalance -y
systemctl start ntpd.service
systemctl enable ntpd.service
systemctl start irqbalance.service
systemctl enable irqbalance.service
systemctl status irqbalance.service

This script must be executed as root on every node in the plan, and each server must be rebooted after it completes successfully.
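A minimal sketch for distributing the script and running it on every node, assuming root SSH access to the IPs in the deployment plan (the paths and the inline reboot are assumptions to adjust for your environment):
# Copy tidbPrepare.sh to each node, run it as root, then reboot the node.
for host in 192.168.59.26 192.168.59.27 192.168.59.28 192.168.59.29; do
    scp tidbPrepare.sh root@${host}:/root/
    ssh root@${host} 'sh /root/tidbPrepare.sh && reboot'
done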
The output from one of the nodes:
[root@node2 ~]# sh tidbPrepare.sh
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
Oct 21 14:41:30 node2 systemd[1]: Starting firewalld - dynamic firewall daemon...
Oct 21 14:41:40 node2 systemd[1]: Started firewalld - dynamic firewall daemon.
Oct 21 14:51:11 node2 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Oct 21 14:51:11 node2 systemd[1]: Stopped firewalld - dynamic firewall daemon.
SELINUX=disabled
vm.swappiness = 0
total used free shared buff/cache available
Mem: 4411 115 4138 8 158 4077
Swap: 0 0 0
Banner none
PermitRootLogin yes
vm.swappiness = 0
net.ipv4.tcp_retries1 = 5
net.ipv4.tcp_syn_retries = 5
sysctl: cannot stat /proc/sys/net/sctp/path_max_retrans: No such file or directory
sysctl: cannot stat /proc/sys/net/sctp/max_init_retransmits: No such file or directory
fs.file-max = 1000000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 0
vm.overcommit_memory = 1
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
* soft nproc unlimited
[always] madvise never
[always] madvise never
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
base | 3.6 kB 00:00:00
extras | 2.9 kB 00:00:00
updates | 2.9 kB 00:00:00
(1/2): extras/7/x86_64/primary_db | 243 kB 00:00:00
(2/2): updates/7/x86_64/primary_db | 12 MB 00:00:01
Resolving Dependencies
--> Running transaction check
---> Package irqbalance.x86_64 3:1.0.7-11.el7 will be updated
---> Package irqbalance.x86_64 3:1.0.7-12.el7 will be an update
---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
--> Processing Dependency: libopts.so.25()(64bit) for package: ntp-4.2.6p5-29.el7.centos.2.x86_64
---> Package ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
---> Package numactl.x86_64 0:2.0.12-5.el7 will be installed
--> Running transaction check
---> Package autogen-libopts.x86_64 0:5.18-5.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================================================
Installing:
ntp x86_64 4.2.6p5-29.el7.centos.2 base 549 k
ntpdate x86_64 4.2.6p5-29.el7.centos.2 base 87 k
numactl x86_64 2.0.12-5.el7 base 66 k
Updating:
irqbalance x86_64 3:1.0.7-12.el7 base 45 k
Installing for dependencies:
autogen-libopts x86_64 5.18-5.el7 base 66 k
Transaction Summary
=================================================================================================================================================================================================================
Install 3 Packages (+1 Dependent package)
Upgrade 1 Package
Total download size: 812 k
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/5): ntp-4.2.6p5-29.el7.centos.2.x86_64.rpm | 549 kB 00:00:00
(2/5): irqbalance-1.0.7-12.el7.x86_64.rpm | 45 kB 00:00:00
(3/5): autogen-libopts-5.18-5.el7.x86_64.rpm | 66 kB 00:00:00
(4/5): numactl-2.0.12-5.el7.x86_64.rpm | 66 kB 00:00:00
(5/5): ntpdate-4.2.6p5-29.el7.centos.2.x86_64.rpm | 87 kB 00:00:00
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 3.0 MB/s | 812 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ntpdate-4.2.6p5-29.el7.centos.2.x86_64 1/6
Installing : autogen-libopts-5.18-5.el7.x86_64 2/6
Installing : ntp-4.2.6p5-29.el7.centos.2.x86_64 3/6
Installing : numactl-2.0.12-5.el7.x86_64 4/6
Updating : 3:irqbalance-1.0.7-12.el7.x86_64 5/6
Cleanup : 3:irqbalance-1.0.7-11.el7.x86_64 6/6
Verifying : 3:irqbalance-1.0.7-12.el7.x86_64 1/6
Verifying : autogen-libopts-5.18-5.el7.x86_64 2/6
Verifying : ntp-4.2.6p5-29.el7.centos.2.x86_64 3/6
Verifying : numactl-2.0.12-5.el7.x86_64 4/6
Verifying : ntpdate-4.2.6p5-29.el7.centos.2.x86_64 5/6
Verifying : 3:irqbalance-1.0.7-11.el7.x86_64 6/6
Installed:
ntp.x86_64 0:4.2.6p5-29.el7.centos.2 ntpdate.x86_64 0:4.2.6p5-29.el7.centos.2 numactl.x86_64 0:2.0.12-5.el7
Dependency Installed:
autogen-libopts.x86_64 0:5.18-5.el7
Updated:
irqbalance.x86_64 3:1.0.7-12.el7
Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
● irqbalance.service - irqbalance daemon
Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-10-21 14:51:27 CST; 426ms ago
Main PID: 5895 (irqbalance)
CGroup: /system.slice/irqbalance.service
└─5895 /usr/sbin/irqbalance --foreground
Oct 21 14:51:27 node2 systemd[1]: Stopped irqbalance daemon.
Oct 21 14:51:27 node2 systemd[1]: Started irqbalance daemon.
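After the reboot, it may be worth spot-checking the key settings on each node before moving on; a minimal verification sketch (run as root, expected values per the script above):
# Quick post-reboot verification of the tuned parameters.
sysctl vm.swappiness fs.file-max net.core.somaxconn   # expect 0 / 1000000 / 32768
cat /sys/kernel/mm/transparent_hugepage/enabled       # expect [never]
getenforce                                            # expect Disabled
free -m | grep -i swap                                # swap total should be 0
ulimit -n                                             # open-file limit for the current shell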
III. Install TiUP Online
Following the official documentation, run the command below:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

Output of the command:
[root@node1 ~]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6064k 100 6064k 0 0 6327k 0 --:--:-- --:--:-- --:--:-- 6323k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile: /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try: tiup playground
===============================================

IV. Update Environment Variables and Upgrade TiUP
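As the installer output suggests, reload the shell profile (or open a new terminal) so the tiup binary is on PATH before continuing:
source /root/.bash_profile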
[root@node1 ~]# which tiup
/root/.tiup/bin/tiup
[root@node1 ~]# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.6.0-linux-amd64.tar.gz 5.92 MiB / 5.92 MiB 100.00% 15.45 MiB/s
Updated successfully!
download https://tiup-mirrors.pingcap.com/cluster-v1.6.0-linux-amd64.tar.gz 7.29 MiB / 7.29 MiB 100.00% 10.93 MiB/s
Updated successfully!
[root@node1 ~]# tiup --binary cluster
/root/.tiup/components/cluster/v1.6.0/tiup-cluster

V. Initialize and Edit the Cluster Topology File
[root@node1 ~]# tiup cluster template > topology.yaml
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.0/tiup-cluster template
[root@node1 ~]# ls
anaconda-ks.cfg  topology.yaml

After editing, the content of topology.yaml is as follows:
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  # # The user who runs the tidb cluster.
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
  arch: "amd64"

monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

pd_servers:
  - host: 192.168.59.27
  - host: 192.168.59.28
  - host: 192.168.59.29

tidb_servers:
  - host: 192.168.59.27
  - host: 192.168.59.28
  - host: 192.168.59.29

tikv_servers:
  - host: 192.168.59.27
  - host: 192.168.59.28
  - host: 192.168.59.29

monitoring_servers:
  - host: 192.168.59.26

grafana_servers:
  - host: 192.168.59.26

alertmanager_servers:
  - host: 192.168.59.26
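Before moving on to deployment, you can list the TiDB versions available in the TiUP mirror to confirm that the target version (v5.2.1 below) exists; the long version list is omitted here:
tiup list tidb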
VI. Run the Deployment Commands
1. Check and automatically fix potential risks in the cluster
[root@node1 ~]# tiup cluster check ./topology.yaml --apply --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.0/tiup-cluster check ./topology.yaml --apply --user root -p
Input SSH password:
+ Detect CPU Arch
- Detecting node 192.168.59.27 ... Done
- Detecting node 192.168.59.28 ... Done
- Detecting node 192.168.59.29 ... Done
- Detecting node 192.168.59.26 ... Done
+ Download necessary tools
- Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
- Getting system info of 192.168.59.27:22 ... Done
- Getting system info of 192.168.59.28:22 ... Done
- Getting system info of 192.168.59.29:22 ... Done
- Getting system info of 192.168.59.26:22 ... Done
+ Check system requirements
- Checking node 192.168.59.27 ... Done
- Checking node 192.168.59.28 ... Done
- Checking node 192.168.59.29 ... Done
- Checking node 192.168.59.26 ... Done
+ Cleanup check files
- Cleanup check files on 192.168.59.27:22 ... Done
- Cleanup check files on 192.168.59.28:22 ... Done
- Cleanup check files on 192.168.59.29:22 ... Done
- Cleanup check files on 192.168.59.26:22 ... Done
Node Check Result Message
---- ----- ------ -------
192.168.59.27 os-version Pass OS is CentOS Linux 7 (Core) 7.6.1810
192.168.59.27 cpu-cores Pass number of CPU cores / threads: 2
192.168.59.27 memory Pass memory size is 4728MB
192.168.59.27 network Pass network speed of ens160 is 10000MB
192.168.59.27 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.27 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.27 selinux Pass SELinux is disabled
192.168.59.27 thp Pass THP is disabled
192.168.59.27 command Pass numactl: policy: default
192.168.59.28 os-version Pass OS is CentOS Linux 7 (Core) 7.6.1810
192.168.59.28 cpu-cores Pass number of CPU cores / threads: 2
192.168.59.28 memory Pass memory size is 4728MB
192.168.59.28 network Pass network speed of ens160 is 10000MB
192.168.59.28 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.28 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.28 selinux Pass SELinux is disabled
192.168.59.28 thp Pass THP is disabled
192.168.59.28 command Pass numactl: policy: default
192.168.59.29 os-version Pass OS is CentOS Linux 7 (Core) 7.6.1810
192.168.59.29 cpu-cores Pass number of CPU cores / threads: 2
192.168.59.29 memory Pass memory size is 4728MB
192.168.59.29 network Pass network speed of ens160 is 10000MB
192.168.59.29 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.29 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.29 selinux Pass SELinux is disabled
192.168.59.29 thp Pass THP is disabled
192.168.59.29 command Pass numactl: policy: default
192.168.59.26 os-version Pass OS is CentOS Linux 7 (Core) 7.6.1810
192.168.59.26 cpu-cores Pass number of CPU cores / threads: 2
192.168.59.26 memory Pass memory size is 4728MB
192.168.59.26 network Pass network speed of ens160 is 10000MB
192.168.59.26 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.26 disk Warn mount point / does not have 'noatime' option set, auto fixing not supported
192.168.59.26 selinux Pass SELinux is disabled
192.168.59.26 thp Pass THP is disabled
192.168.59.26 command Pass numactl: policy: default
+ Try to apply changes to fix failed checks
- Applying changes on 192.168.59.27 ... Done
- Applying changes on 192.168.59.28 ... Done
- Applying changes on 192.168.59.29 ... Done
- Applying changes on 192.168.59.26 ... Done
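The remaining Warn items concern the missing noatime mount option on /, which the --apply run cannot fix automatically. A minimal sketch of the manual fix on each affected node (the exact /etc/fstab entry depends on your disk layout, so treat this as an example rather than a drop-in command):
# Remount / with noatime now, and add "noatime" to the options column of the
# root entry in /etc/fstab so the setting survives the next reboot.
mount -o remount,noatime /
mount | grep ' / '    # confirm noatime now appears in the mount options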
2. After the checks pass, deploy a TiDB cluster named tidb-test with version v5.2.1
[root@node1 ~]# tiup cluster deploy tidb-test v5.2.1 ./topology.yaml --user root -p
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.0/tiup-cluster deploy tidb-test v5.2.1 ./topology.yaml --user root -p
Input SSH password:
+ Detect CPU Arch
- Detecting node 192.168.59.27 ... Done
- Detecting node 192.168.59.28 ... Done
- Detecting node 192.168.59.29 ... Done
- Detecting node 192.168.59.26 ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v5.2.1
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 192.168.59.27 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd 192.168.59.28 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
pd 192.168.59.29 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 192.168.59.27 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 192.168.59.28 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 192.168.59.29 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tidb 192.168.59.27 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tidb 192.168.59.28 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tidb 192.168.59.29 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
prometheus 192.168.59.26 9090 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 192.168.59.26 3000 linux/x86_64 /tidb-deploy/grafana-3000
alertmanager 192.168.59.26 9093/9094 linux/x86_64 /tidb-deploy/alertmanager-9093,/tidb-data/alertmanager-9093
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v5.2.1 (linux/amd64) ... Done
- Download tikv:v5.2.1 (linux/amd64) ... Done
- Download tidb:v5.2.1 (linux/amd64) ... Done
- Download prometheus:v5.2.1 (linux/amd64) ... Done
- Download grafana:v5.2.1 (linux/amd64) ... Done
- Download alertmanager: (linux/amd64) ... Done
- Download node_exporter: (linux/amd64) ... Done
- Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 192.168.59.27:22 ... Done
- Prepare 192.168.59.28:22 ... Done
- Prepare 192.168.59.29:22 ... Done
- Prepare 192.168.59.26:22 ... Done
+ Copy files
- Copy pd -> 192.168.59.27 ... Done
- Copy pd -> 192.168.59.28 ... Done
- Copy pd -> 192.168.59.29 ... Done
- Copy tikv -> 192.168.59.27 ... Done
- Copy tikv -> 192.168.59.28 ... Done
- Copy tikv -> 192.168.59.29 ... Done
- Copy tidb -> 192.168.59.27 ... Done
- Copy tidb -> 192.168.59.28 ... Done
- Copy tidb -> 192.168.59.29 ... Done
- Copy prometheus -> 192.168.59.26 ... Done
- Copy grafana -> 192.168.59.26 ... Done
- Copy alertmanager -> 192.168.59.26 ... Done
- Copy node_exporter -> 192.168.59.27 ... Done
- Copy node_exporter -> 192.168.59.28 ... Done
- Copy node_exporter -> 192.168.59.29 ... Done
- Copy node_exporter -> 192.168.59.26 ... Done
- Copy blackbox_exporter -> 192.168.59.27 ... Done
- Copy blackbox_exporter -> 192.168.59.28 ... Done
- Copy blackbox_exporter -> 192.168.59.29 ... Done
- Copy blackbox_exporter -> 192.168.59.26 ... Done
+ Check status
Enabling component pd
Enabling instance 192.168.59.29:2379
Enabling instance 192.168.59.27:2379
Enabling instance 192.168.59.28:2379
Enable instance 192.168.59.28:2379 success
Enable instance 192.168.59.29:2379 success
Enable instance 192.168.59.27:2379 success
Enabling component tikv
Enabling instance 192.168.59.29:20160
Enabling instance 192.168.59.27:20160
Enabling instance 192.168.59.28:20160
Enable instance 192.168.59.28:20160 success
Enable instance 192.168.59.29:20160 success
Enable instance 192.168.59.27:20160 success
Enabling component tidb
Enabling instance 192.168.59.29:4000
Enabling instance 192.168.59.27:4000
Enabling instance 192.168.59.28:4000
Enable instance 192.168.59.28:4000 success
Enable instance 192.168.59.27:4000 success
Enable instance 192.168.59.29:4000 success
Enabling component prometheus
Enabling instance 192.168.59.26:9090
Enable instance 192.168.59.26:9090 success
Enabling component grafana
Enabling instance 192.168.59.26:3000
Enable instance 192.168.59.26:3000 success
Enabling component alertmanager
Enabling instance 192.168.59.26:9093
Enable instance 192.168.59.26:9093 success
Enabling component node_exporter
Enabling instance 192.168.59.26
Enabling instance 192.168.59.28
Enabling instance 192.168.59.27
Enabling instance 192.168.59.29
Enable 192.168.59.27 success
Enable 192.168.59.28 success
Enable 192.168.59.29 success
Enable 192.168.59.26 success
Enabling component blackbox_exporter
Enabling instance 192.168.59.26
Enabling instance 192.168.59.28
Enabling instance 192.168.59.27
Enabling instance 192.168.59.29
Enable 192.168.59.28 success
Enable 192.168.59.27 success
Enable 192.168.59.29 success
Enable 192.168.59.26 success
Cluster `tidb-test` deployed successfully, you can start it with command: `tiup cluster start tidb-test`
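Optionally, confirm that the new cluster is registered with TiUP before starting it:
tiup cluster list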
3. Start the cluster and check its status
[root@node1 ~]# tiup cluster start tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.0/tiup-cluster start tidb-test
Starting cluster tidb-test...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/tidb-test/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.28
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.26
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.29
+ [Parallel] - UserSSH: user=tidb, host=192.168.59.27
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 192.168.59.27:2379
Starting instance 192.168.59.28:2379
Starting instance 192.168.59.29:2379
Start instance 192.168.59.29:2379 success
Start instance 192.168.59.28:2379 success
Start instance 192.168.59.27:2379 success
Starting component tikv
Starting instance 192.168.59.29:20160
Starting instance 192.168.59.27:20160
Starting instance 192.168.59.28:20160
Start instance 192.168.59.29:20160 success
Start instance 192.168.59.28:20160 success
Start instance 192.168.59.27:20160 success
Starting component tidb
Starting instance 192.168.59.29:4000
Starting instance 192.168.59.27:4000
Starting instance 192.168.59.28:4000
Start instance 192.168.59.27:4000 success
Start instance 192.168.59.29:4000 success
Start instance 192.168.59.28:4000 success
Starting component prometheus
Starting instance 192.168.59.26:9090
Start instance 192.168.59.26:9090 success
Starting component grafana
Starting instance 192.168.59.26:3000
Start instance 192.168.59.26:3000 success
Starting component alertmanager
Starting instance 192.168.59.26:9093
Start instance 192.168.59.26:9093 success
Starting component node_exporter
Starting instance 192.168.59.26
Starting instance 192.168.59.28
Starting instance 192.168.59.27
Starting instance 192.168.59.29
Start 192.168.59.29 success
Start 192.168.59.27 success
Start 192.168.59.28 success
Start 192.168.59.26 success
Starting component blackbox_exporter
Starting instance 192.168.59.26
Starting instance 192.168.59.28
Starting instance 192.168.59.27
Starting instance 192.168.59.29
Start 192.168.59.29 success
Start 192.168.59.27 success
Start 192.168.59.28 success
Start 192.168.59.26 success
+ [ Serial ] - UpdateTopology: cluster=tidb-test
Started cluster `tidb-test` successfully
[root@node1 ~]# tiup cluster display tidb-test
Starting component `cluster`: /root/.tiup/components/cluster/v1.6.0/tiup-cluster display tidb-test
Cluster type: tidb
Cluster name: tidb-test
Cluster version: v5.2.1
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.59.29:2379/dashboard
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.59.26:9093 alertmanager 192.168.59.26 9093/9094 linux/x86_64 Up /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
192.168.59.26:3000 grafana 192.168.59.26 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.59.27:2379 pd 192.168.59.27 2379/2380 linux/x86_64 Up /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.59.28:2379 pd 192.168.59.28 2379/2380 linux/x86_64 Up|L /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.59.29:2379 pd 192.168.59.29 2379/2380 linux/x86_64 Up|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.59.26:9090 prometheus 192.168.59.26 9090 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.59.27:4000 tidb 192.168.59.27 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.59.28:4000 tidb 192.168.59.28 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.59.29:4000 tidb 192.168.59.29 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.59.27:20160 tikv 192.168.59.27 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.59.28:20160 tikv 192.168.59.28 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.59.29:20160 tikv 192.168.59.29 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
Total nodes: 12
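With every node reported Up, a quick connectivity test can be run from any host that has a MySQL client installed (port 4000 matches the tidb_servers in the topology; a freshly deployed cluster has an empty root password, which should be changed right away):
# Connect to one of the TiDB servers and print the server version.
mysql -h 192.168.59.27 -P 4000 -u root -e "SELECT tidb_version()\G"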
VII. Errors and Solutions
1. The irqbalance package installed successfully, but the service would not start, and the cluster check kept reporting the following error:
192.168.59.27 service Fail service irqbalance is not running

Output of the installation:
[root@node1 ~]# yum install irqbalance
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* extras: mirrors.aliyun.com
* updates: centosj9.centos.org
Resolving Dependencies
--> Running transaction check
---> Package irqbalance.x86_64 3:1.0.7-11.el7 will be updated
---> Package irqbalance.x86_64 3:1.0.7-12.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================================================
Updating:
irqbalance x86_64 3:1.0.7-12.el7 base 45 k
Transaction Summary
=================================================================================================================================================================================================================
Upgrade 1 Package
Total download size: 45 k
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
irqbalance-1.0.7-12.el7.x86_64.rpm | 45 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : 3:irqbalance-1.0.7-12.el7.x86_64 1/2
Cleanup : 3:irqbalance-1.0.7-11.el7.x86_64 2/2
Verifying : 3:irqbalance-1.0.7-12.el7.x86_64 1/2
Verifying : 3:irqbalance-1.0.7-11.el7.x86_64 2/2
Updated:
irqbalance.x86_64 3:1.0.7-12.el7
Complete!
[root@node1 ~]# systemctl start irqbalance.service
[root@node1 ~]# systemctl status irqbalance.service
● irqbalance.service - irqbalance daemon
Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Thu 2021-10-21 11:23:45 CST; 5s ago
Process: 6299 ExecStart=/usr/sbin/irqbalance --foreground $IRQBALANCE_ARGS (code=exited, status=0/SUCCESS)
Main PID: 6299 (code=exited, status=0/SUCCESS)
Oct 21 11:23:45 node1 systemd[1]: Started irqbalance daemon.

Root cause: irqbalance distributes hardware interrupts across CPUs, but the VM had only 1 socket with 1 core, so there was nothing to balance. The service therefore started without reporting an error but exited immediately. After shutting the VM down, changing its CPU configuration to 1 socket with 2 cores, and booting the OS again, irqbalance stayed running normally.
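A quick way to confirm whether a node has enough CPUs for irqbalance to stay running (on a 1-socket/1-core VM, as above, the daemon exits immediately):
# irqbalance only keeps running when there is more than one CPU to balance across.
nproc                                          # total online CPUs; should be >= 2
lscpu | grep -E '^(CPU\(s\)|Core|Socket)'      # sockets, cores per socket, total CPUs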
