
TiDB v8.5.3 Single-Machine Cluster Deployment Guide

Original article by Lucifer三思而后行 · 2025-08-26

Preface

I have recently been running TiDB recovery drills and needed to deploy a minimal but complete TiDB cluster topology on a single Linux server. This article documents the installation process.

Environment Preparation

Before deploying the TiDB cluster, prepare a deployment host and make sure its software meets the following requirements:

  • CentOS 7.3 or a later release is recommended
  • The host has internet access for downloading TiDB and the related installation packages

Note: Since v8.5.1, TiDB has been re-adapted to glibc 2.17, restoring compatibility with CentOS Linux 7.
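Before going further, it may be worth confirming the OS release and glibc version on the deployment host. A minimal check along these lines (exact output depends on your image):

[root@test ~]# cat /etc/redhat-release      ## OS release, expect CentOS 7.3 or later
[root@test ~]# ldd --version | head -1      ## glibc version; v8.5.1 and later only require 2.17
[root@test ~]# uname -m                     ## architecture, expect x86_64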

Environment Information

The minimal TiDB cluster topology consists of the following instances (a quick check that these ports are free follows the table):

Component   Count  IP             Ports
PD          1      192.168.31.79  2379/2380
TiDB        1      192.168.31.79  4000/10080
TiKV        3      192.168.31.79  20160-20162/20180-20182
TiFlash     1      192.168.31.79  9000/3930/20170/20292/8234/8123
Prometheus  1      192.168.31.79  9090/12020
Grafana     1      192.168.31.79  3000
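Since every instance shares one host, it is worth confirming up front that none of the ports above are already taken. A minimal check, assuming ss from iproute2 is installed:

## list every TCP port currently in LISTEN state; none of the ports in the table above should appear
[root@test ~]# ss -tln | awk 'NR>1 {print $4}' | awk -F: '{print $NF}' | sort -nu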

Install Dependencies

Dependencies required to compile and build TiDB:

  • Golang 1.23 or later
  • Rust nightly-2023-12-28 or later
  • LLVM 17.0 or later
  • sshpass 1.06 or later
  • GCC 7.x (not met on this host)
  • glibc 2.28-151.el8 (not met on this host)

Download the required packages:

  • Rust download: https://forge.rust-lang.org/infra/other-installation-methods.html
  • Golang download: https://go.dev/dl/
  • sshpass download: https://sourceforge.net/projects/sshpass/files/latest/download

Install Golang:

[root@test soft]# tar -C /usr/local -xf go1.25.0.linux-amd64.tar.gz
[root@test ~]# cat<<-\EOF>>/root/.bash_profile
export PATH=$PATH:/usr/local/go/bin
EOF
[root@test ~]# source /root/.bash_profile
[root@test ~]# go version
go version go1.25.0 linux/amd64

Install Rust:

[root@test soft]# tar -xf rust-1.89.0-x86_64-unknown-linux-gnu.tar.tar
[root@test soft]# cd rust-1.89.0-x86_64-unknown-linux-gnu/
[root@test rust-1.89.0-x86_64-unknown-linux-gnu]# ./install.sh
[root@test ~]# rustc --version
rustc 1.89.0 (29483883e 2025-08-04)

Install sshpass:

[root@test soft]# tar -xf sshpass-1.10.tar.gz
[root@test soft]# cd sshpass-1.10/
[root@test sshpass-1.10]# ./configure && make && make install
[root@test ~]# sshpass -V
sshpass 1.10

Disable the Firewall

[root@test ~]# systemctl stop firewalld.service
[root@test ~]# systemctl disable firewalld.service
[root@test ~]# systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Check and Disable Swap

[root@test ~]# echo "vm.swappiness = 0">> /etc/sysctl.conf [root@test ~]# swapoff -a [root@test ~]# sysctl -p vm.swappiness = 0

Remember to edit /etc/fstab and comment out the swap partition:

#/dev/mapper/centos-swap swap swap defaults 0 0
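After commenting out the swap entry, a quick verification that swap is really gone:

[root@test ~]# free -h                  ## the Swap line should show 0 used / 0 total
[root@test ~]# swapon -s                ## prints nothing when no swap device is active
[root@test ~]# sysctl vm.swappiness     ## expect vm.swappiness = 0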

Check and Configure OS Optimization Parameters

[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@test ~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@test ~]# cat<<EOF>>/etc/sysctl.conf
fs.file-max = 1000000
net.core.somaxconn = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_syncookies = 0
vm.overcommit_memory = 1
EOF
[root@test ~]# sysctl -p
[root@test ~]# cat<<EOF>>/etc/security/limits.conf
tidb soft nofile 1000000
tidb hard nofile 1000000
tidb soft stack 32768
tidb hard stack 32768
EOF
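The following checks confirm that THP is disabled and the kernel parameters took effect. Note that the two echo commands above do not survive a reboot, so they need to be reapplied (or put into a startup script) after a restart:

[root@test ~]# cat /sys/kernel/mm/transparent_hugepage/enabled   ## expect: always madvise [never]
[root@test ~]# cat /sys/kernel/mm/transparent_hugepage/defrag    ## expect: always madvise [never]
[root@test ~]# sysctl -n fs.file-max net.core.somaxconn vm.overcommit_memory
[root@test ~]# su - tidb -c 'ulimit -n'    ## expect 1000000; run after the tidb user is created below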

Adjust MaxSessions

Because multiple instances are simulated on a single machine, increase the sshd connection limit as the root user:

[root@test ~]# vim /etc/ssh/sshd_config    ## set MaxSessions 20
[root@test ~]# systemctl restart sshd.service
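If you prefer not to edit the file interactively, a non-interactive equivalent plus a verification step might look like this (assuming the stock sshd_config still carries the commented #MaxSessions line):

[root@test ~]# sed -i 's/^#\?MaxSessions.*/MaxSessions 20/' /etc/ssh/sshd_config
[root@test ~]# sshd -t                          ## syntax check before restarting
[root@test ~]# systemctl restart sshd.service
[root@test ~]# sshd -T | grep -i maxsessions    ## expect: maxsessions 20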

Create the TiDB User

[root@test ~]# useradd tidb
[root@test ~]# echo "Tidb@123" | passwd tidb --stdin
Changing password for user tidb.
passwd: all authentication tokens updated successfully.
[root@test ~]# cat<<-EOF>>/etc/sudoers
tidb ALL=(ALL) NOPASSWD: ALL
EOF
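A couple of optional checks to make sure the sudoers append is valid and passwordless sudo works for the tidb user:

[root@test ~]# visudo -c                                     ## validate /etc/sudoers syntax after the append
[root@test ~]# id tidb                                       ## confirm the user exists
[root@test ~]# su - tidb -c 'sudo -n true' && echo "passwordless sudo OK"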

Deployment

This deployment runs in an intranet environment, so the official online repository is not used; the cluster is deployed from a local mirror instead. For setting up the local mirror, see the earlier post "TiDB 离线部署 TiUP 组件" (deploying the TiUP component offline); a rough sketch of that flow follows.
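For reference, the usual offline flow looks roughly like the sketch below, assuming the v8.5.3 server package tarball has already been copied to the host (the exact file name may differ; see the linked post for the full procedure):

[root@test ~]# tar -xzf tidb-community-server-v8.5.3-linux-amd64.tar.gz -C /root
[root@test ~]# sh /root/tidb-community-server-v8.5.3-linux-amd64/local_install.sh   ## installs tiup and points it at the local mirror
[root@test ~]# source /root/.bash_profile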

TiUP is already in place:

[root@test ~]# tiup mirror show
/root/tidb-community-server-v8.5.3-linux-amd64
[root@test ~]# tiup --version
1.16.2 tiup
Go Version: go1.21.13
Git Ref: v1.16.2
GitHash: 678c52de0c0ef30634b8ba7302a8376caa95d50d

Create and start the cluster. First, write the topology file:

[root@test ~]# cat<<-\EOF>topo.yaml
# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 11122
  deploy_dir: "/data/tidb-deploy"
  data_dir: "/data/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    instance.tidb_slow_log_threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 192.168.31.79

tidb_servers:
  - host: 192.168.31.79

tikv_servers:
  - host: 192.168.31.79
    port: 20160
    status_port: 20180
    config:
      server.labels: { host: "logic-host-1" }
  - host: 192.168.31.79
    port: 20161
    status_port: 20181
    config:
      server.labels: { host: "logic-host-2" }
  - host: 192.168.31.79
    port: 20162
    status_port: 20182
    config:
      server.labels: { host: "logic-host-3" }

tiflash_servers:
  - host: 192.168.31.79

monitoring_servers:
  - host: 192.168.31.79

grafana_servers:
  - host: 192.168.31.79
EOF

Pre-deployment check:

[root@test ~]# tiup cluster check topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.31.79 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.31.79:11122 ... Done
+ Check time zone
  - Checking node 192.168.31.79 ... Done
+ Check system requirements
+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
  - Checking node 192.168.31.79 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.31.79:11122 ... Done
Node           Check         Result  Message
----           -----         ------  -------
192.168.31.79  os-version    Fail    CentOS Linux 7 (Core) 7.9.2009 not supported, use version 9 or higher
192.168.31.79  cpu-cores     Pass    number of CPU cores / threads: 4
192.168.31.79  ntp           Warn    The NTPd daemon may be not start
192.168.31.79  disk          Warn    mount point /data does not have 'noatime' option set
192.168.31.79  selinux       Pass    SELinux is disabled
192.168.31.79  thp           Pass    THP is disabled
192.168.31.79  command       Pass    numactl: policy: default
192.168.31.79  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.31.79  memory        Pass    memory size is 8192MB
192.168.31.79  network       Pass    network speed of ens192 is 10000MB
192.168.31.79  disk          Fail    multiple components tikv:/data/tidb-data/tikv-20160,tikv:/data/tidb-data/tikv-20161,tikv:/data/tidb-data/tikv-20162,tiflash:/data/tidb-data/tiflash-9000 are using the same partition 192.168.31.79:/data as data dir
192.168.31.79  disk          Fail    mount point /data does not have 'nodelalloc' option set
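Most of the Warn/Fail items above are expected for a single-host lab: the shared /data partition and the os-version complaint can be ignored here. For the repairable items (NTP, mount options, and so on), TiUP can optionally be asked to attempt the fixes itself:

## optionally let tiup try to fix the repairable check failures automatically
[root@test ~]# tiup cluster check topo.yaml --apply --user root -p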

Deploy the cluster:

[root@test ~]# tiup cluster deploy lucifer v8.5.3 topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.31.79 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.31.79 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    lucifer
Cluster version: v8.5.3
Role        Host           Ports                            OS/Arch       Directories
----        ----           -----                            -------       -----------
pd          192.168.31.79  2379/2380                        linux/x86_64  /data/tidb-deploy/pd-2379,/data/tidb-data/pd-2379
tikv        192.168.31.79  20160/20180                      linux/x86_64  /data/tidb-deploy/tikv-20160,/data/tidb-data/tikv-20160
tikv        192.168.31.79  20161/20181                      linux/x86_64  /data/tidb-deploy/tikv-20161,/data/tidb-data/tikv-20161
tikv        192.168.31.79  20162/20182                      linux/x86_64  /data/tidb-deploy/tikv-20162,/data/tidb-data/tikv-20162
tidb        192.168.31.79  4000/10080                       linux/x86_64  /data/tidb-deploy/tidb-4000
tiflash     192.168.31.79  9000/3930/20170/20292/8234/8123  linux/x86_64  /data/tidb-deploy/tiflash-9000,/data/tidb-data/tiflash-9000
prometheus  192.168.31.79  9090/12020                       linux/x86_64  /data/tidb-deploy/prometheus-9090,/data/tidb-data/prometheus-9090
grafana     192.168.31.79  3000                             linux/x86_64  /data/tidb-deploy/grafana-3000
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N) y
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v8.5.3 (linux/amd64) ... Done
  - Download tikv:v8.5.3 (linux/amd64) ... Done
  - Download tidb:v8.5.3 (linux/amd64) ... Done
  - Download tiflash:v8.5.3 (linux/amd64) ... Done
  - Download prometheus:v8.5.3 (linux/amd64) ... Done
  - Download grafana:v8.5.3 (linux/amd64) ... Done
  - Download node_exporter: (linux/amd64) ... Done
  - Download blackbox_exporter: (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 192.168.31.79:11122 ... Done
+ Deploy TiDB instance
  - Copy pd -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tikv -> 192.168.31.79 ... Done
  - Copy tidb -> 192.168.31.79 ... Done
  - Copy tiflash -> 192.168.31.79 ... Done
  - Copy prometheus -> 192.168.31.79 ... Done
  - Copy grafana -> 192.168.31.79 ... Done
  - Deploy node_exporter -> 192.168.31.79 ... Done
  - Deploy blackbox_exporter -> 192.168.31.79 ... Done
+ Copy certificate to remote host
+ Init instance configs
  - Generate config pd -> 192.168.31.79:2379 ... Done
  - Generate config tikv -> 192.168.31.79:20160 ... Done
  - Generate config tikv -> 192.168.31.79:20161 ... Done
  - Generate config tikv -> 192.168.31.79:20162 ... Done
  - Generate config tidb -> 192.168.31.79:4000 ... Done
  - Generate config tiflash -> 192.168.31.79:9000 ... Done
  - Generate config prometheus -> 192.168.31.79:9090 ... Done
  - Generate config grafana -> 192.168.31.79:3000 ... Done
+ Init monitor configs
  - Generate config node_exporter -> 192.168.31.79 ... Done
  - Generate config blackbox_exporter -> 192.168.31.79 ... Done
Enabling component pd
        Enabling instance 192.168.31.79:2379
        Enable instance 192.168.31.79:2379 success
Enabling component tikv
        Enabling instance 192.168.31.79:20162
        Enabling instance 192.168.31.79:20160
        Enabling instance 192.168.31.79:20161
        Enable instance 192.168.31.79:20162 success
        Enable instance 192.168.31.79:20161 success
        Enable instance 192.168.31.79:20160 success
Enabling component tidb
        Enabling instance 192.168.31.79:4000
        Enable instance 192.168.31.79:4000 success
Enabling component tiflash
        Enabling instance 192.168.31.79:9000
        Enable instance 192.168.31.79:9000 success
Enabling component prometheus
        Enabling instance 192.168.31.79:9090
        Enable instance 192.168.31.79:9090 success
Enabling component grafana
        Enabling instance 192.168.31.79:3000
        Enable instance 192.168.31.79:3000 success
Enabling component node_exporter
        Enabling instance 192.168.31.79
        Enable 192.168.31.79 success
Enabling component blackbox_exporter
        Enabling instance 192.168.31.79
        Enable 192.168.31.79 success
Cluster `lucifer` deployed successfully, you can start it with command: `tiup cluster start lucifer --init`

Start the cluster:

[root@test ~]# tiup cluster start lucifer --init
Starting cluster lucifer...
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [Parallel] - UserSSH: user=tidb, host=192.168.31.79
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 192.168.31.79:2379
        Start instance 192.168.31.79:2379 success
Starting component tikv
        Starting instance 192.168.31.79:20162
        Starting instance 192.168.31.79:20160
        Starting instance 192.168.31.79:20161
        Start instance 192.168.31.79:20162 success
        Start instance 192.168.31.79:20161 success
        Start instance 192.168.31.79:20160 success
Starting component tidb
        Starting instance 192.168.31.79:4000
        Start instance 192.168.31.79:4000 success
Starting component tiflash
        Starting instance 192.168.31.79:9000
        Start instance 192.168.31.79:9000 success
Starting component prometheus
        Starting instance 192.168.31.79:9090
        Start instance 192.168.31.79:9090 success
Starting component grafana
        Starting instance 192.168.31.79:3000
        Start instance 192.168.31.79:3000 success
Starting component node_exporter
        Starting instance 192.168.31.79
        Start 192.168.31.79 success
Starting component blackbox_exporter
        Starting instance 192.168.31.79
        Start 192.168.31.79 success
+ [ Serial ] - UpdateTopology: cluster=lucifer
Started cluster `lucifer` successfully
The root password of TiDB database has been changed.
The new password is: 'm+92G0Q3eNR4^6cq*@'.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.

List clusters:

[root@test ~]# tiup cluster list
Name     User  Version  Path                                          PrivateKey
----     ----  -------  ----                                          ----------
lucifer  tidb  v8.5.3   /root/.tiup/storage/cluster/clusters/lucifer  /root/.tiup/storage/cluster/clusters/lucifer/ssh/id_rsa

Check the cluster status:

[root@test ~]# tiup cluster display lucifer
Cluster type:       tidb
Cluster name:       lucifer
Cluster version:    v8.5.3
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.31.79:2379/dashboard
Dashboard URLs:     http://192.168.31.79:2379/dashboard
Grafana URL:        http://192.168.31.79:3000
ID                   Role        Host           Ports                            OS/Arch       Status   Data Dir                         Deploy Dir
--                   ----        ----           -----                            -------       ------   --------                         ----------
192.168.31.79:3000   grafana     192.168.31.79  3000                             linux/x86_64  Up       -                                /data/tidb-deploy/grafana-3000
192.168.31.79:2379   pd          192.168.31.79  2379/2380                        linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379          /data/tidb-deploy/pd-2379
192.168.31.79:9090   prometheus  192.168.31.79  9090/12020                       linux/x86_64  Up       /data/tidb-data/prometheus-9090  /data/tidb-deploy/prometheus-9090
192.168.31.79:4000   tidb        192.168.31.79  4000/10080                       linux/x86_64  Up       -                                /data/tidb-deploy/tidb-4000
192.168.31.79:9000   tiflash     192.168.31.79  9000/3930/20170/20292/8234/8123  linux/x86_64  Up       /data/tidb-data/tiflash-9000     /data/tidb-deploy/tiflash-9000
192.168.31.79:20160  tikv        192.168.31.79  20160/20180                      linux/x86_64  Up       /data/tidb-data/tikv-20160       /data/tidb-deploy/tikv-20160
192.168.31.79:20161  tikv        192.168.31.79  20161/20181                      linux/x86_64  Up       /data/tidb-data/tikv-20161       /data/tidb-deploy/tikv-20161
192.168.31.79:20162  tikv        192.168.31.79  20162/20182                      linux/x86_64  Up       /data/tidb-data/tikv-20162       /data/tidb-deploy/tikv-20162
Total nodes: 8

Install the MySQL Client

TiDB speaks the MySQL protocol, so a MySQL client is required to connect. CentOS 7 ships with MariaDB preinstalled, which must be removed first:

[root@test ~]# rpm -e --nodeps $(rpm -qa | grep mariadb)

Download the packages on a machine with internet access:

[root@lucifer ~]# wget https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
[root@lucifer ~]# wget http://dev.mysql.com/get/mysql80-community-release-el7-10.noarch.rpm

Install the MySQL client:

[root@test ~]# yum -y install mysql80-community-release-el7-10.noarch.rpm
[root@test ~]# rpm --import RPM-GPG-KEY-mysql-2023
[root@test ~]# yum -y install mysql

Connect to the database:

## The initial root password is the one printed in the cluster start log: m+92G0Q3eNR4^6cq*@
[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -p
mysql> show databases;
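For a quick non-interactive sanity check, the same connection can be driven with -e, using the initial password printed by the start log:

[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -p'm+92G0Q3eNR4^6cq*@' -e "select tidb_version()\G"
[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -p'm+92G0Q3eNR4^6cq*@' -e "select * from information_schema.cluster_info\G"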

Change the initial root password:

mysql> use mysql
mysql> alter user 'root'@'%' identified by 'tidb';
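To confirm the new password works, reconnect from the shell (an inline password like this is only acceptable in a throwaway lab environment):

[root@test ~]# mysql -h 192.168.31.79 -P 4000 -uroot -ptidb -e "select current_user();"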

Cluster monitoring (command-line probes of the status endpoints follow the list):

  • Dashboard: http://192.168.31.79:2379/dashboard (log in with root/tidb)
  • Grafana: http://192.168.31.79:3000 (default credentials: admin/admin)
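Besides the web UIs, the HTTP status endpoints can be probed from the shell, which is handy for scripted health checks; a minimal example:

[root@test ~]# curl -s http://192.168.31.79:10080/status                ## TiDB server status endpoint
[root@test ~]# curl -s http://192.168.31.79:2379/pd/api/v1/members      ## PD members API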

Final Thoughts

With this, the single-machine TiDB cluster deployment is complete; it is suitable for development, testing, and study. For production, follow the officially recommended multi-machine deployment topology.
