TiDB Installation and Operations Basics
1 Download and install TiUP:
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
######################################################################################
[root@test tidb]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 7087k 100 7087k 0 0 6941k 0 0:00:01 0:00:01 --:--:-- 6948k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile: /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try: tiup playground
===============================================
[root@test tidb]# cat /root/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export PATH=/root/.tiup/bin:$PATH
######################################################################################
2 Load the new environment variables into the current shell:
[root@test tidb]# source /root/.bash_profile
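To confirm the install took effect, check that tiup resolves on the PATH and prints its version:
which tiup
tiup --version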
3 Install the TiUP cluster component:
tiup cluster
¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥
tiup is checking updates for component cluster ...
A new version of cluster is available:
The latest version: v1.11.1
Local installed version:
Update current component: tiup update cluster
Update all components: tiup update --all
The component `cluster` version is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.11.1-linux-amd64.tar.gz 8.44 MiB / 8.44 MiB 100.00% 6.81 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster
Deploy a TiDB cluster for production
Usage:
tiup cluster [command]
Available Commands:
check Perform preflight checks for the cluster.
deploy Deploy a cluster for production
start Start a TiDB cluster
stop Stop a TiDB cluster
restart Restart a TiDB cluster
scale-in Scale in a TiDB cluster
scale-out Scale out a TiDB cluster
destroy Destroy a specified cluster
clean (EXPERIMENTAL) Cleanup a specified cluster
upgrade Upgrade a specified TiDB cluster
display Display information of a TiDB cluster
prune Destroy and remove instances that is in tombstone state
list List all clusters
audit Show audit log of cluster operation
import Import an exist TiDB cluster from TiDB-Ansible
edit-config Edit TiDB cluster config
show-config Show TiDB cluster config
reload Reload a TiDB cluster's config and restart if needed
patch Replace the remote package with a specified package and restart the service
rename Rename the cluster
enable Enable a TiDB cluster automatically at boot
disable Disable automatic enabling of TiDB clusters at boot
replay Replay previous operation and skip successed steps
template Print topology template
tls Enable/Disable TLS between TiDB components
meta backup/restore meta information
help Help about any command
completion Generate the autocompletion script for the specified shell
Flags:
-c, --concurrency int max number of parallel tasks allowed (default 5)
--format string (EXPERIMENTAL) The format of output, available values are [default, json] (default "default")
-h, --help help for tiup
--ssh string (EXPERIMENTAL) The executor type: 'builtin', 'system', 'none'.
--ssh-timeout uint Timeout in seconds to connect host via SSH, ignored for operations that don't need an SSH connection. (default 5)
-v, --version version for tiup
--wait-timeout uint Timeout in seconds to wait for an operation to complete, ignored for operations that don't fit. (default 120)
-y, --yes Skip all confirmations and assumes 'yes'
Use "tiup cluster help [command]" for more information about a command.
¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥¥
4 If the TiUP cluster component is already installed on this machine, update it to the latest version:
tiup update --self && tiup update cluster
))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
[root@test tidb]# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.11.1-linux-amd64.tar.gz 6.92 MiB / 6.92 MiB 100.00% 10.11 MiB/s
Updated successfully!
component cluster version v1.11.1 is already installed
Updated successfully!
))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))
5 Because this setup simulates a multi-machine deployment on a single host, raise the sshd connection limit as the root user:
5.1 Edit /etc/ssh/sshd_config and set MaxSessions to 20.
[root@test tidb]# vim /etc/ssh/sshd_config
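Editing with vim works; as a non-interactive alternative, this sketch rewrites the MaxSessions line whether it is commented out, already set, or missing (verify the result before restarting sshd in 5.2):
# Raise MaxSessions to 20, then append the line if no match existed
sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
grep -q '^MaxSessions' /etc/ssh/sshd_config || echo 'MaxSessions 20' >> /etc/ssh/sshd_config
grep MaxSessions /etc/ssh/sshd_config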
5.2 Restart the sshd service:
[root@test tidb]# service sshd restart
6 Create the cluster topology file
[root@test tidb]# vi topo.yml
Edit the configuration file according to the template and save it as topo.yml (the name must match the file passed to tiup cluster deploy in step 7), where:
user: "tidb": the cluster's internal management runs as the tidb system user (created automatically during deployment); target machines are reached over SSH on port 22 by default
replication.enable-placement-rules: this PD parameter must be set so that TiFlash runs correctly
host: set to the IP address of the deployment host
The configuration template is kept in the file topo.yml; a minimal single-host sketch is given below.
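This sketch mirrors the hosts and ports shown in the step 7 transcript and follows the standard TiUP topology format; treat it as a starting point under those assumptions, not as the authoritative template from these notes:
cat > topo.yml <<'EOF'
global:
  user: "tidb"               # management OS user, created automatically during deploy
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"
server_configs:
  pd:
    replication.enable-placement-rules: true   # required for TiFlash
pd_servers:
  - host: 172.16.9.124
tidb_servers:
  - host: 172.16.9.124
tikv_servers:                # three TiKV instances on one host need distinct ports
  - host: 172.16.9.124
    port: 20160
    status_port: 20180
  - host: 172.16.9.124
    port: 20161
    status_port: 20181
  - host: 172.16.9.124
    port: 20162
    status_port: 20182
tiflash_servers:
  - host: 172.16.9.124
monitoring_servers:
  - host: 172.16.9.124
grafana_servers:
  - host: 172.16.9.124
EOF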
[root@test tidb]# tiup list tidb
Available versions for tidb:
Version Installed Release Platforms
------- --------- ------- ---------
build-debug-mode 2022-06-10T14:29:34+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
nightly -> v6.5.0-alpha-nightly-20221123 2022-11-23T23:35:40+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v3.0.0 2020-04-16T14:03:31+08:00 darwin/amd64,linux/amd64
v3.0 2020-04-16T16:58:06+08:00 darwin/amd64,linux/amd64
....
v5.4.2 2022-07-08T10:12:37+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v5.4.3 2022-10-13T22:13:21+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.0.0 2022-04-06T11:34:40+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.0 2022-06-13T12:30:16+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.1 2022-09-01T12:09:05+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.1.2 2022-10-24T15:16:17+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.2.0 2022-08-23T09:14:36+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.3.0 2022-09-30T10:59:36+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.4.0 2022-11-17T11:26:23+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
v6.5.0-alpha-nightly-20221123 2022-11-23T23:35:40+08:00 darwin/amd64,darwin/arm64,linux/amd64,linux/arm64
[root@test tidb]#
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
7 Run the cluster deployment command:
tiup cluster deploy <cluster-name> <tidb-version> ./topo.yml --user root -p
<cluster-name> sets the cluster name
<tidb-version> sets the cluster version; run tiup list tidb to see the TiDB versions currently available for deployment
-p means password login is used when connecting to the target machines
Follow the prompts, entering "y" and the root password, to complete the deployment:
Do you want to continue? [y/N]: y
Input SSH password:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
[root@test tidb]# tiup cluster deploy testti v6.4.0 ./topo.yml --user root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster deploy testti v6.4.0 ./topo.yml --user root -p
Input SSH password:
+ Detect CPU Arch Name
- Detecting node 172.16.9.124 Arch info ... Done
+ Detect CPU OS Name
- Detecting node 172.16.9.124 OS info ... Done
Please confirm your topology:
Cluster type: tidb
Cluster name: testti
Cluster version: v6.4.0
Role Host Ports OS/Arch Directories
---- ---- ----- ------- -----------
pd 172.16.9.124 2379/2380 linux/x86_64 /tidb-deploy/pd-2379,/tidb-data/pd-2379
tikv 172.16.9.124 20160/20180 linux/x86_64 /tidb-deploy/tikv-20160,/tidb-data/tikv-20160
tikv 172.16.9.124 20161/20181 linux/x86_64 /tidb-deploy/tikv-20161,/tidb-data/tikv-20161
tikv 172.16.9.124 20162/20182 linux/x86_64 /tidb-deploy/tikv-20162,/tidb-data/tikv-20162
tidb 172.16.9.124 4000/10080 linux/x86_64 /tidb-deploy/tidb-4000
tiflash 172.16.9.124 9000/8123/3930/20170/20292/8234 linux/x86_64 /tidb-deploy/tiflash-9000,/tidb-data/tiflash-9000
prometheus 172.16.9.124 9090/12020 linux/x86_64 /tidb-deploy/prometheus-9090,/tidb-data/prometheus-9090
grafana 172.16.9.124 3000 linux/x86_64 /tidb-deploy/grafana-3000
Attention:
1. If the topology is not what you expected, check your yaml file.
2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
- Download pd:v6.4.0 (linux/amd64) ... Done
- Download tikv:v6.4.0 (linux/amd64) ... Done
- Download tidb:v6.4.0 (linux/amd64) ... Done
- Download tiflash:v6.4.0 (linux/amd64) ... Done
- Download prometheus:v6.4.0 (linux/amd64) ... Done
- Download grafana:v6.4.0 (linux/amd64) ... Done
- Download node_exporter: (linux/amd64) ... Done
- Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
- Prepare 172.16.9.124:22 ... Done
+ Deploy TiDB instance
- Copy pd -> 172.16.9.124 ... Done
- Copy tikv -> 172.16.9.124 ... Done
- Copy tikv -> 172.16.9.124 ... Done
- Copy tikv -> 172.16.9.124 ... Done
- Copy tidb -> 172.16.9.124 ... Done
- Copy tiflash -> 172.16.9.124 ... Done
- Copy prometheus -> 172.16.9.124 ... Done
- Copy grafana -> 172.16.9.124 ... Done
- Deploy node_exporter -> 172.16.9.124 ... Done
- Deploy blackbox_exporter -> 172.16.9.124 ... Done
+ Copy certificate to remote host
+ Init instance configs
- Generate config pd -> 172.16.9.124:2379 ... Done
- Generate config tikv -> 172.16.9.124:20160 ... Done
- Generate config tikv -> 172.16.9.124:20161 ... Done
- Generate config tikv -> 172.16.9.124:20162 ... Done
- Generate config tidb -> 172.16.9.124:4000 ... Done
- Generate config tiflash -> 172.16.9.124:9000 ... Done
- Generate config prometheus -> 172.16.9.124:9090 ... Done
- Generate config grafana -> 172.16.9.124:3000 ... Done
+ Init monitor configs
- Generate config node_exporter -> 172.16.9.124 ... Done
- Generate config blackbox_exporter -> 172.16.9.124 ... Done
+ Check status
Enabling component pd
Enabling instance 172.16.9.124:2379
Enable instance 172.16.9.124:2379 success
Enabling component tikv
Enabling instance 172.16.9.124:20162
Enabling instance 172.16.9.124:20160
Enabling instance 172.16.9.124:20161
Enable instance 172.16.9.124:20160 success
Enable instance 172.16.9.124:20161 success
Enable instance 172.16.9.124:20162 success
Enabling component tidb
Enabling instance 172.16.9.124:4000
Enable instance 172.16.9.124:4000 success
Enabling component tiflash
Enabling instance 172.16.9.124:9000
Enable instance 172.16.9.124:9000 success
Enabling component prometheus
Enabling instance 172.16.9.124:9090
Enable instance 172.16.9.124:9090 success
Enabling component grafana
Enabling instance 172.16.9.124:3000
Enable instance 172.16.9.124:3000 success
Enabling component node_exporter
Enabling instance 172.16.9.124
Enable 172.16.9.124 success
Enabling component blackbox_exporter
Enabling instance 172.16.9.124
Enable 172.16.9.124 success
Cluster `testti` deployed successfully, you can start it with command: `tiup cluster start testti --init`
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
8 Start the cluster:
tiup cluster start <cluster-name>
tiup cluster start testti --init # initialize the cluster and generate a root password
tiup cluster start testti # start the cluster with an empty root password
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
[root@test tidb]# tiup cluster start testti --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.11.1/tiup-cluster start testti --init
Starting cluster testti...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/testti/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/testti/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [Parallel] - UserSSH: user=tidb, host=172.16.9.124
+ [ Serial ] - StartCluster
Starting component pd
Starting instance 172.16.9.124:2379
Start instance 172.16.9.124:2379 success
Starting component tikv
Starting instance 172.16.9.124:20162
Starting instance 172.16.9.124:20160
Starting instance 172.16.9.124:20161
Start instance 172.16.9.124:20162 success
Start instance 172.16.9.124:20161 success
Start instance 172.16.9.124:20160 success
Starting component tidb
Starting instance 172.16.9.124:4000
Start instance 172.16.9.124:4000 success
Starting component tiflash
Starting instance 172.16.9.124:9000
Start instance 172.16.9.124:9000 success
Starting component prometheus
Starting instance 172.16.9.124:9090
Start instance 172.16.9.124:9090 success
Starting component grafana
Starting instance 172.16.9.124:3000
Start instance 172.16.9.124:3000 success
Starting component node_exporter
Starting instance 172.16.9.124
Start 172.16.9.124 success
Starting component blackbox_exporter
Starting instance 172.16.9.124
Start 172.16.9.124 success
+ [ Serial ] - UpdateTopology: cluster=testti
Started cluster `testti` successfully
The root password of TiDB database has been changed.
The new password is: '87K2evDTE&=S5*_P64'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
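The matching lifecycle commands from the tiup cluster help in step 3 work the same way, e.g.:
tiup cluster stop testti # stop all components of the cluster
tiup cluster restart testti # restart all components of the cluster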
9 Access the cluster:
Install the MySQL client (skip this step if it is already installed):
yum -y install mysql
Connect to the TiDB database:
mysql -h 172.16.9.124 -P 4000 -u root # password is empty (cluster started without --init)
mysql -h 172.16.9.124 -P 4000 -u root -p # prompts for the password generated by --init
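Once connected, a quick smoke test; tidb_version() is a built-in TiDB function that reports the full build details:
mysql -h 172.16.9.124 -P 4000 -u root -p -e 'SELECT tidb_version()\G'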
Access the TiDB Grafana monitoring:
Open http://{grafana-ip}:3000 to reach the cluster's Grafana monitoring page; the default username and password are both admin, and the password must be changed on first login.
http://172.16.9.124:3000
Access the TiDB Dashboard:
Open http://{pd-ip}:2379/dashboard to reach the TiDB Dashboard; the default username is root, and the password is the database root password (empty unless the cluster was initialized with --init).
Confirm the list of currently deployed clusters:
tiup cluster list
Check the cluster's topology and status:
tiup cluster display testti
In the display output, each instance appears with its status and ports:
pd: 2379
tikv: 20160 / 20161 / 20162
tidb: 4000
tiflash: 9000
prometheus: 9090
grafana: 3000
node_exporter_port: 9100
blackbox_exporter_port: 9115