ClickHouse in Action: Installing a Highly Available Cluster

大数据小黑屋 2020-12-25

Author:

TomBombadil, Big Data Engineer, Contributor of ClickHouse


ClickHouse is currently one of the hottest MPP-architecture database systems in the big data field, and its outstanding performance has attracted many companies. Using Ansible, a powerful operations tool, this article walks through the detailed steps for quickly building a production-ready ClickHouse cluster.


Components:

  • ClickHouse: an MPP-architecture columnar OLAP database system

  • Apache ZooKeeper: a coordination service for distributed systems

  • Ansible: a Python-based operations tool that supports batch operations across cluster nodes


1.1. Service nodes

  • The ClickHouse cluster uses 4 nodes, which can host either a logical cluster of 2 shards (Shard) with 2 replicas (Replica) each, or a logical cluster of a single shard with 4 replicas.

  • All nodes run CentOS 7 and have the firewall disabled.

# Ansible control node

192.168.1.200



# ClickHouse server nodes
192.168.1.201
192.168.1.202
192.168.1.203
192.168.1.204

# ZooKeeper cluster nodes
192.168.1.211
192.168.1.212
192.168.1.213
192.168.1.214
192.168.1.215

1.2. Preparing the Ansible environment

  • Pre-install the Ansible tool on the Ansible control node

  • Append the admin account's public key to the authorized_keys file on every ClickHouse and ZooKeeper node to enable passwordless SSH login
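
One way to do this from the control node, assuming password authentication is still enabled on the target nodes and the admin key pair lives at ~/.ssh/id_rsa (a sketch):

$ ssh-keygen -t rsa   # skip if the admin account already has a key pair
$ for ip in 192.168.1.{201..204} 192.168.1.{211..215}; do ssh-copy-id admin@${ip}; done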

  • Ansible configuration

$ cat /etc/ansible/hosts

192.168.1.200  ansible_ssh_user=admin 
192.168.1.211 ansible_ssh_user=admin
192.168.1.212 ansible_ssh_user=admin
192.168.1.213 ansible_ssh_user=admin
192.168.1.201 ansible_ssh_user=admin
192.168.1.202 ansible_ssh_user=admin
192.168.1.203 ansible_ssh_user=admin
192.168.1.204 ansible_ssh_user=admin
192.168.1.214 ansible_ssh_user=admin
192.168.1.215 ansible_ssh_user=admin

# Ansible control node
[this]
192.168.1.200

# clickhouse-cluster
[ch-all]
192.168.1.201
192.168.1.202
192.168.1.203
192.168.1.204

# zookeeper-cluster
[zk-all]
192.168.1.211
192.168.1.212
192.168.1.213
192.168.1.214
192.168.1.215

  • Test the Ansible environment

$ ansible ch-all -m shell -a 'hostname' -o

192.168.1.203 | CHANGED | rc=0 | (stdout) localhost
192.168.1.201 | CHANGED | rc=0 | (stdout) localhost
192.168.1.202 | CHANGED | rc=0 | (stdout) localhost
192.168.1.204 | CHANGED | rc=0 | (stdout) localhost

1.3. Installing ZooKeeper remotely with Ansible

1.3.1. ZooKeeper cluster configuration files

  • Create a zk_conf directory under /home/admin

$ ll zk_conf
total 8
-rw-r--r-- 1 admin admin 2715 Oct 1 20:35 log4j.properties
-rw-rw-r-- 1 admin admin 323 Oct 1 20:35 zoo.cfg

# ZooKeeper cluster configuration
$ cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/opt/apache-zookeeper-3.5.8/data
dataLogDir=/opt/apache-zookeeper-3.5.8/dataLog
server.1=192.168.1.211:2888:3888
server.2=192.168.1.212:2888:3888
server.3=192.168.1.213:2888:3888
server.4=192.168.1.214:2888:3888
server.5=192.168.1.215:2888:3888

# Logging configuration
$ cat log4j.properties
... ...
zookeeper.root.logger=INFO,ROLLINGFILE

zookeeper.console.threshold=INFO

zookeeper.log.dir=.
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.log.maxfilesize=256MB
zookeeper.log.maxbackupindex=20
... ...


1.3.2. Installing the ZooKeeper cluster

# Download the ZooKeeper archive to the Ansible control node
$ pwd
/home/admin
$ ll |grep apache-zookeeper
-rw-rw-r-- 1 admin admin 9394700 May 11 18:10 apache-zookeeper-3.5.8-bin.tar.gz

# Push the installation archive to the cluster nodes via Ansible
$ ansible zk-all -m copy -a 'src=/home/admin/apache-zookeeper-3.5.8-bin.tar.gz dest=/home/admin/apache-zookeeper-3.5.8-bin.tar.gz'
# Unpack and install ZooKeeper
$ ansible zk-all -m shell -a 'sudo tar -zxvf /home/admin/apache-zookeeper-3.5.8-bin.tar.gz -C /opt/'

# Rename the ZooKeeper directory
$ ansible zk-all -m shell -a 'sudo mv /opt/apache-zookeeper-3.5.8-bin /opt/apache-zookeeper-3.5.8'
# Create the data and dataLog directories
$ ansible zk-all -m shell -a 'sudo mkdir /opt/apache-zookeeper-3.5.8/{data,dataLog}'
# Back up the bundled log4j.properties file
$ ansible zk-all -m shell -a 'sudo mv /opt/apache-zookeeper-3.5.8/conf/log4j.properties /opt/apache-zookeeper-3.5.8/conf/log4j.properties.bak'
# Push zoo.cfg and log4j.properties to the cluster nodes
$ ansible zk-all -m copy -a 'src=/home/admin/zk_conf/zoo.cfg dest=/home/admin/zk_conf/zoo.cfg'
$ ansible zk-all -m copy -a 'src=/home/admin/zk_conf/log4j.properties dest=/home/admin/zk_conf/log4j.properties'

# Copy zoo.cfg and log4j.properties into the ZooKeeper conf directory
$ ansible zk-all -m shell -a 'sudo mv /home/admin/zk_conf/zoo.cfg /opt/apache-zookeeper-3.5.8/conf/zoo.cfg'
$ ansible zk-all -m shell -a 'sudo mv /home/admin/zk_conf/log4j.properties /opt/apache-zookeeper-3.5.8/conf/log4j.properties'
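
Everything above was unpacked and created as root, while zkServer.sh is started as the admin user in 1.3.4. If that applies to your environment as well, hand the installation over to admin before starting the cluster (a sketch, assuming the group is also named admin):

$ ansible zk-all -m shell -a 'sudo chown -R admin:admin /opt/apache-zookeeper-3.5.8'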

1.3.3. ZooKeeper node ID configuration

  • Configure the /opt/apache-zookeeper-3.5.8/data/myid file on each node
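
The article does not show how these files were written; one way to generate them, pairing ids 1-5 with 192.168.1.211-215 to match the server.N entries in zoo.cfg (a sketch), is the loop below. The cat that follows verifies the result:

$ for i in 1 2 3 4 5; do ansible 192.168.1.21${i} -m shell -a "echo ${i} | sudo tee /opt/apache-zookeeper-3.5.8/data/myid"; done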

$ ansible zk-all -m shell -a 'cat /opt/apache-zookeeper-3.5.8/data/myid'
192.168.1.215 | CHANGED | rc=0 >>
5
192.168.1.213 | CHANGED | rc=0 >>
3
192.168.1.214 | CHANGED | rc=0 >>
4
192.168.1.211 | CHANGED | rc=0 >>
1
192.168.1.212 | CHANGED | rc=0 >>
2

1.3.4. Starting/stopping the ZooKeeper cluster

# Start the cluster
$ ansible zk-all -m shell -a "/opt/apache-zookeeper-3.5.8/bin/zkServer.sh start"
# Stop the cluster
$ ansible zk-all -m shell -a "/opt/apache-zookeeper-3.5.8/bin/zkServer.sh stop"
# Check the status of each cluster node
$ ansible zk-all -m shell -a "/opt/apache-zookeeper-3.5.8/bin/zkServer.sh status"

1.4. Installing ClickHouse

1.4.1. Downloading the packages

  • Download location

  • Required files

clickhouse-client-20.3.12.112-2.noarch.rpm
clickhouse-common-static-20.3.12.112-2.x86_64.rpm
clickhouse-server-20.3.12.112-2.noarch.rpm
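
For example, the packages can be fetched onto the control node following the same repository layout as the upgrade packages in section 1.8.2 (the exact URLs are an assumption and may have changed):

$ wget https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-client-20.3.12.112-2.noarch.rpm
$ wget https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-common-static-20.3.12.112-2.x86_64.rpm
$ wget https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-server-20.3.12.112-2.noarch.rpm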

1.4.2. Installing ClickHouse remotely with Ansible

  • Installation

# Create the rpm package directory on the remote nodes
$ ansible ch-all -m shell -a 'mkdir /home/admin/rpm/' -o

# Push the packages to the ClickHouse cluster nodes
$ ansible ch-all -m copy -a 'src=/home/admin/rpm/clickhouse-client-20.3.12.112-2.noarch.rpm dest=/home/admin/rpm/clickhouse-client-20.3.12.112-2.noarch.rpm' -o
$ ansible ch-all -m copy -a 'src=/home/admin/rpm/clickhouse-common-static-20.3.12.112-2.x86_64.rpm dest=/home/admin/rpm/clickhouse-common-static-20.3.12.112-2.x86_64.rpm' -o
$ ansible ch-all -m copy -a 'src=/home/admin/rpm/clickhouse-server-20.3.12.112-2.noarch.rpm dest=/home/admin/rpm/clickhouse-server-20.3.12.112-2.noarch.rpm' -o

# Run the rpm installation remotely
$ ansible ch-all -m shell -a 'sudo rpm -ivh /home/admin/rpm/clickhouse*.rpm --nodeps' -o

# Start the ClickHouse service
$ ansible ch-all -m shell -a 'sudo service clickhouse-server start' -o

  • Check the ClickHouse service status

$ ansible ch-all -m shell -a 'sudo service clickhouse-server status' -o

192.168.1.203 | CHANGED | rc=0 | (stdout) clickhouse-server service is running
192.168.1.201 | CHANGED | rc=0 | (stdout) clickhouse-server service is running
192.168.1.204 | CHANGED | rc=0 | (stdout) clickhouse-server service is running
192.168.1.202 | CHANGED | rc=0 | (stdout) clickhouse-server service is running
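
A quick way to confirm that all four nodes ended up on the same version (clickhouse-client --version needs no server connection):

$ ansible ch-all -m shell -a 'clickhouse-client --version' -o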

1.5. ClickHouse cluster configuration

  • Cluster configuration file: /etc/clickhouse-server/config.d/metrika.xml

    • $ cat metrika.xml

<?xml version="1.0"?>
<yandex>
<zookeeper-servers>
<!-- ZooKeeper configuration; the name can be customized -->
<!-- ZooKeeper nodes; multiple nodes can be listed, distinguished by index -->
<node index="1">
<host>192.168.1.211</host>
<port>2181</port>
</node>
<node index="2">
<host>192.168.1.212</host>
<port>2181</port>
</node>
<node index="3">
<host>192.168.1.213</host>
<port>2181</port>
</node>
<node index="4">
<host>192.168.1.214</host>
<port>2181</port>
</node>
<node index="5">
<host>192.168.1.215</host>
<port>2181</port>
</node>
</zookeeper-servers>

<clickhouse_remote_servers>
<!-- 2 shards with 2 replicas -->
<cluster_two_shards>
<shard>
<replica>
<host>192.168.1.201</host>
<port>9000</port>
<user>default</user>
<password>xxxxxxx</password>
</replica>
<replica>
<host>192.168.1.203</host>
<port>9000</port>
<user>default</user>
<password>xxxxxxx</password>
</replica>
</shard>
<shard>
<replica>
<host>192.168.1.202</host>
<port>9000</port>
<user>default</user>
<password>xxxxxxx</password>
</replica>
<replica>
<host>192.168.1.204</host>
<port>9000</port>
<user>default</user>
<password>xxxxxxx</password>
</replica>
</shard>
</cluster_two_shards>
<!-- 1 shard with 4 replicas -->
<cluster_4replica>
<shard>
<replica>
<host>192.168.1.201</host>
<port>9000</port>
<user>haha</user>
<password>xxxxxxx</password>
</replica>
<replica>
<host>192.168.1.203</host>
<port>9000</port>
<user>haha</user>
<password>xxxxxxx</password>
</replica>
<replica>
<host>192.168.1.202</host>
<port>9000</port>
<user>haha</user>
<password>xxxxxxx</password>
</replica>
<replica>
<host>192.168.1.204</host>
<port>9000</port>
<user>haha</user>
<password>xxxxxxx</password>
</replica>
</shard>
</cluster_4replica>
</clickhouse_remote_servers>
</yandex>

  • Per-node configuration

    • $ cat /etc/clickhouse-server/config.xml

<?xml version="1.0"?>
<!--
NOTE: User and query level settings are set up in "users.xml" file.
-->

<yandex>
<logger>
<!-- Possible levels: https://github.com/pocoproject/poco/blob/poco-1.9.4-release/Foundation/include/Poco/Logger.h#L105 -->
<level>warning</level>
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
<size>256M</size>
<count>10</count>
<!-- <console>1</console> --> <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->
</logger>
<!--display_name>production</display_name--> <!-- It is the name that will be shown in the client -->
<http_port>8123</http_port>
<tcp_port>9000</tcp_port>
<mysql_port>9004</mysql_port>
... ...
<!-- Port for communication between replicas. Used for data exchange. -->
<interserver_http_port>9009</interserver_http_port>
... ...
<!-- Each node's own IP address -->
<interserver_http_host>192.168.1.201</interserver_http_host>
<!-- Listen for requests on all addresses -->
<listen_host>0.0.0.0</listen_host>
... ...
<!-- Include the external configuration file -->
<include_from>/etc/clickhouse-server/config.d/metrika.xml</include_from>
<!-- Cluster configuration -->
<remote_servers incl="clickhouse_remote_servers" />
<!-- ZooKeeper cluster configuration -->
<zookeeper incl="zookeeper-servers" optional="false" />
... ...
<!-- Macro variables; the Replicated* table engines need these parameters -->
<!-- Configured individually on each node -->
<macros>
<shard>1</shard>
<replica>1</replica>
</macros>
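<!-- Example assignment (an assumption, not taken from the article): for cluster_two_shards above,
     192.168.1.201 could use shard=1/replica=1, 192.168.1.203 shard=1/replica=2,
     192.168.1.202 shard=2/replica=1, and 192.168.1.204 shard=2/replica=2 -->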
... ...
<!-- Dictionary configuration -->
<dictionaries_config>*_dictionary.xml</dictionaries_config>
... ...

  • User and permission configuration

    • $ cat /etc/clickhouse-server/users.xml

<?xml version="1.0"?>
<yandex>
<!-- Profiles of settings. -->
<profiles>
<!-- Default settings. -->
<default>
<!-- Maximum memory usage for processing single query, in bytes. -->
<max_memory_usage>10000000000</max_memory_usage>
<use_uncompressed_cache>0</use_uncompressed_cache>
<load_balancing>random</load_balancing>
<log_queries>1</log_queries>
</default>

<!-- Profile that allows only read queries. -->
<readonly>
<log_queries>1</log_queries>
<readonly>1</readonly>
</readonly>
</profiles>

<!-- Users and ACL. -->
<users>
<haha>
<password_sha256_hex>XXXXXXXXXXXXXXXXXXX</password_sha256_hex>
<quota>default</quota>
<profile>default</profile>
<networks incl="networks" replace="replace">
<ip>::/0</ip>
</networks>
</haha>
<!-- If user name was not specified, 'default' user is used. -->
<heihei>
<password_sha256_hex>XXXXXXXXXXXXXXXXXXX</password_sha256_hex>
<networks incl="networks" replace="replace">
<ip>::/0</ip>
</networks>

<!-- Settings profile for user. -->
<profile>readonly</profile>

<!-- Quota for user. -->
<quota>default</quota>
</heihei>
</users>
<!-- Quotas. -->
<quotas>
<!-- Name of quota. -->
<default>
<!-- Limits for time interval. You could specify many intervals with different limits. -->
<interval>
<!-- Length of interval. -->
<duration>3600</duration>

<!-- No limits. Just calculate resource usage for time interval. -->
<queries>0</queries>
<errors>0</errors>
<result_rows>0</result_rows>
<read_rows>0</read_rows>
<execution_time>7200</execution_time>
</interval>
</default>
</quotas>
</yandex>
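
metrika.xml, config.xml and users.xml must be present on every ClickHouse node, with the per-node values (interserver_http_host and the <macros> section) adjusted individually; they can be distributed with the same ansible copy/shell pattern used for ZooKeeper above. Once the files are in place, restarting the service applies the changes (a minimal sketch):

$ ansible ch-all -m shell -a 'sudo service clickhouse-server restart' -o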


1.6. ClickHouse system tables

1.6.1. system.tables

  • Table metadata

SELECT
database,
name,
engine,
data_paths
FROM system.tables
WHERE database = 'test_db'

┌─database─┬─name────────────────┬─engine───────────────────────┬─data_paths─────────────────────────────────────────────┐
│ test_db  │ ch_test_table       │ Distributed                  │ ['/var/lib/clickhouse/data/test_db/ch_test_table/']       │
│ test_db  │ ch_test_table_local │ ReplicatedReplacingMergeTree │ ['/var/lib/clickhouse/data/test_db/ch_test_table_local/'] │
└──────────┴─────────────────────┴──────────────────────────────┴────────────────────────────────────────────────────────────┘
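
These two tables could have been created with DDL along the following lines. This is only a hypothetical sketch: the column list and ORDER BY key are assumptions, while the cluster name, ZooKeeper path and replica name are inferred from the cluster_4replica definition in metrika.xml and the system.replicas output in 1.6.3 below; the credentials are placeholders.

$ clickhouse-client --user haha --password 1234 --multiquery <<'SQL'
-- database and tables are created on every node of the logical cluster in one go
CREATE DATABASE IF NOT EXISTS test_db ON CLUSTER cluster_4replica;

-- local replicated table; {shard} and {replica} are substituted from each node's <macros>
CREATE TABLE test_db.ch_test_table_local ON CLUSTER cluster_4replica
(
    id UInt64,
    event_time DateTime
)
ENGINE = ReplicatedReplacingMergeTree('/clickhouse/tables/test_db/ch_test_table_local', '{shard}_{replica}')
ORDER BY id;

-- distributed table that fans queries out to the local tables
CREATE TABLE test_db.ch_test_table ON CLUSTER cluster_4replica
AS test_db.ch_test_table_local
ENGINE = Distributed(cluster_4replica, test_db, ch_test_table_local, rand());
SQL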

1.6.2. system.clusters

  • Cluster information

SELECT 
cluster,
shard_num,
replica_num,
host_name,
host_address,
is_local,
user,
default_database
FROM system.clusters

┌─cluster─────────────┬─shard_num─┬─replica_num─┬─host_name─────┬─host_address──┬─is_local─┬─user────┬─default_database─┐
│ cluster_4replica    │         1 │           1 │ 192.168.1.201 │ 192.168.1.201 │        1 │ default │                  │
│ cluster_4replica    │         1 │           2 │ 192.168.1.203 │ 192.168.1.203 │        0 │ default │                  │
│ cluster_4replica    │         1 │           3 │ 192.168.1.202 │ 192.168.1.202 │        0 │ default │                  │
│ cluster_4replica    │         1 │           4 │ 192.168.1.204 │ 192.168.1.204 │        0 │ default │                  │
│ cluster_two_replica │         1 │           1 │ 192.168.1.201 │ 192.168.1.201 │        1 │ default │                  │
│ cluster_two_replica │         1 │           2 │ 192.168.1.203 │ 192.168.1.203 │        0 │ default │                  │
│ cluster_two_replica │         2 │           1 │ 192.168.1.202 │ 192.168.1.202 │        0 │ default │                  │
│ cluster_two_replica │         2 │           2 │ 192.168.1.204 │ 192.168.1.204 │        0 │ default │                  │
└─────────────────────┴───────────┴─────────────┴───────────────┴───────────────┴──────────┴─────────┴──────────────────┘

1.6.3. system.replicas

  • Replica information

SELECT 
database,
table,
is_leader,
replica_name,
replica_path,
total_replicas
FROM system.replicas

┌─database─┬─table───────────────┬─is_leader─┬─replica_name─┬─replica_path─────────────────────────────────────────────────┬─total_replicas─┐
│ test_db  │ ch_test_table_local │         1 │ 1_1          │ /clickhouse/tables/test_db/ch_test_table_local/replicas/1_1 │              4 │
└──────────┴─────────────────────┴───────────┴──────────────┴──────────────────────────────────────────────────────────────┴────────────────┘

1.6.4. system.metrics

  • System performance metrics

SELECT 
metric,
value,
concat(substring(description, 1, 80), '...') AS desc
FROM system.metrics
WHERE metric LIKE '%Connection'

┌─metric────────────────┬─value─┬─desc──────────────────────────────────────────────────────────────────────────────┐
│ TCPConnection         │     3 │ Number of connections to TCP server (clients with native interface), also includ... │
│ MySQLConnection       │     0 │ Number of client connections using MySQL protocol...                                 │
│ HTTPConnection        │     0 │ Number of connections to HTTP server...                                              │
│ InterserverConnection │     0 │ Number of connections from other replicas to fetch parts...                          │
└───────────────────────┴───────┴───────────────────────────────────────────────────────────────────────────────────────┘

1.6.5. system.processes

  • Currently running queries

SELECT 
query_id,
user,
address,
query
FROM system.processes
ORDER BY query_id ASC


┌─query_id─────────────────────────────┬─user─────┬─address──────────────┬─query──────────────────────────────────────────────────────────────────────────────┐
│ 14652ef4-fc78-4825-9f30-f27aff088eb4 │ dp_write │ ::ffff:192.168.1.200 │ SELECT query_id, user, address, query FROM system.processes ORDER BY query_id ASC │
└──────────────────────────────────────┴──────────┴──────────────────────┴────────────────────────────────────────────────────────────────────────────────────┘

1.6.6. system.disks

  • Disk space usage

SELECT 
name,
path,
formatReadableSize(free_space) AS free,
formatReadableSize(total_space) AS total,
formatReadableSize(keep_free_space) AS reserved
FROM system.disks

┌─name────┬─path─────────────────┬─free──────┬─total─────┬─reserved─┐
│ default │ /var/lib/clickhouse/ │ 31.37 GiB │ 49.99 GiB │ 0.00 B   │
└─────────┴──────────────────────┴───────────┴───────────┴──────────┘

1.6.7. system.parts

  • Data parts and their disk usage

SELECT 
database,
formatReadableSize(sum(bytes_on_disk)) AS on_disk
FROM system.parts
GROUP BY database

┌─database─┬─on_disk────┐
│ system   │ 135.19 MiB │
│ test_db  │ 1.83 GiB   │
└──────────┴────────────┘

1.6.8. system.query_log

  • Query log

SELECT 
user,
client_hostname AS host,
client_name AS client,
formatDateTime(query_start_time, '%T') AS started,
query_duration_ms / 1000 AS sec,
round(memory_usage / 1048576) AS MEM_MB,
result_rows AS RES_CNT,
result_bytes / 1048576 AS RES_MB,
read_rows AS R_CNT,
round(read_bytes / 1048576) AS R_MB,
written_rows AS W_CNT,
round(written_bytes / 1048576) AS W_MB,
concat(subString(query,1,80),'...') query_str
FROM system.query_log
WHERE type = 2
ORDER BY query_duration_ms DESC
LIMIT 10

┌─user────┬─host──────┬─client────────────┬─started──┬───sec─┬─MEM_MB─┬─RES_CNT─┬────────────────RES_MB─┬─R_CNT─┬─R_MB─┬─W_CNT─┬─W_MB─┬─query_str────────────────────────────────┐
│ default │ localhost │ ClickHouse client │ 14:49:08 │ 0.001 │      0 │       0 │                     0 │    46 │    0 │     0 │    0 │ SHOW TABLES FROM system...                │
│ default │ localhost │ ClickHouse client │ 14:49:48 │ 0.001 │      0 │       0 │                     0 │     1 │    0 │     0 │    0 │ SELECT count(*) FROM system.query_log...  │
│ default │ localhost │ ClickHouse client │ 14:49:37 │     0 │      0 │      42 │ 0.0033664703369140625 │    42 │    0 │     0 │    0 │ DESCRIBE TABLE system.query_log...        │
└─────────┴───────────┴───────────────────┴──────────┴───────┴────────┴─────────┴───────────────────────┴───────┴──────┴───────┴──────┴───────────────────────────────────────────┘

1.7. Cluster monitoring

1.7.1. Monitoring the ClickHouse cluster with Ansible

  • Fetch the query log

$ ansible ch-all -m shell -a "echo 'select user,client_hostname,result_rows,query_start_time,memory_usage FROM system.query_log order by query_start_time desc limit 2' | curl 'http://localhost:8123/?user=haha&password=1234' -d @-" -o

192.168.1.202 | CHANGED | rc=0 >>
dp_write 0 2020-10-10 15:37:16 0
dp_write 0 2020-10-10 15:37:16 0 % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 198 0 68 100 130 5533 10578 --:--:-- --:--:-- --:--:-- 10833
192.168.1.201 | CHANGED | rc=0 >>
dp_write 0 2020-10-10 15:37:16 0
dp_write 0 2020-10-10 15:37:16 0 % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 198 0 68 100 130 5897 11273 --:--:-- --:--:-- --:--:-- 11818
192.168.1.204 | CHANGED | rc=0 >>
dp_write 0 2020-10-10 15:37:16 0
dp_write 0 2020-10-10 15:37:16 0 % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 198 0 68 100 130 2339 4471 --:--:-- --:--:-- --:--:-- 4642
192.168.1.203 | CHANGED | rc=0 >>
dp_write 0 2020-10-10 15:37:17 0
dp_write 0 2020-10-10 15:37:17 0 % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 198 0 68 100 130 135 259 --:--:-- --:--:-- --:--:-- 259
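
The percentage and speed lines mixed into the output above are curl's progress meter rather than query results; adding -s to curl keeps only the rows returned by ClickHouse, for example:

$ ansible ch-all -m shell -a "echo 'select user,client_hostname,result_rows,query_start_time,memory_usage FROM system.query_log order by query_start_time desc limit 2' | curl -s 'http://localhost:8123/?user=haha&password=1234' -d @-" -o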

  • Fetch disk space usage

$ ansible ch-all -m shell -a "echo 'SELECT     database,    formatReadableSize(sum(bytes_on_disk)) AS on_disk FROM system.parts GROUP BY database' | curl 'http://localhost:8123/?user=haha&password=1234' -d @-"

192.168.1.201 | CHANGED | rc=0 >>
system 93.45 MiB
test_db 683.06 MiB % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 154 0 45 100 109 134 325 --:--:-- --:--:-- --:--:-- 325
192.168.1.204 | CHANGED | rc=0 >>
system 201.70 MiB
test_db 680.65 MiB % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 155 0 46 100 109 163 386 --:--:-- --:--:-- --:--:-- 386
192.168.1.202 | CHANGED | rc=0 >>
system 78.31 MiB
test_db 683.06 MiB % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 154 0 45 100 109 134 326 --:--:-- --:--:-- --:--:-- 326
192.168.1.203 | CHANGED | rc=0 >>
system 65.55 MiB
test_db 681.64 MiB % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 154 0 45 100 109 87 211 --:--:-- --:--:-- --:--:-- 211

1.8. Cluster upgrade

1.8.1. Precautions

  • Perform cluster upgrades during off-peak hours.

  • Shut down all clickhouse-client processes connected to the servers during the upgrade; otherwise the upgrade can easily run into problems.

  • Upgrade in batches. In production, cluster tables usually have multiple replicas; upgrade the nodes hosting one subset of the replicas first, verify everything is fine, then continue with the nodes hosting the remaining replicas.

  • Check the server logs immediately after the upgrade; if anything looks abnormal, try restarting the service process first and, failing that, the host.

  • After the upgrade, verify that reads and writes on local tables work and that basic operations on cluster tables behave normally.

1.8.2. Upgrade steps

  • Prepare the latest rpm packages required for the upgrade; download URLs:

https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-client-20.8.8.2-2.noarch.rpm
https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-common-static-20.8.8.2-2.x86_64.rpm
https://repo.yandex.ru/clickhouse/rpm/stable/x86_64/clickhouse-server-20.8.8.2-2.noarch.rpm

  • Distribute the rpm upgrade packages to every cluster node

  • Upgrade batch by batch

$ ansible 192.168.87.151,192.168.87.152 -m shell -a 'sudo service clickhouse-server stop && sudo rpm -Uvh /home/bhc_admin/rpm/clickhouse/clickhouse-common-static-20.8.6.6-2.x86_64.rpm --nodeps'
$ ansible 192.168.87.151,192.168.87.152 -m shell -a 'sudo service clickhouse-server stop && sudo rpm -Uvh /home/bhc_admin/rpm/clickhouse/clickhouse-server-20.8.6.6-2.x86_64.rpm --nodeps'
$ ansible 192.168.87.151,192.168.87.152 -m shell -a 'sudo service clickhouse-server stop && sudo rpm -Uvh /home/bhc_admin/rpm/clickhouse/clickhouse-client-20.8.6.6-2.x86_64.rpm --nodeps'

  • After installation, start the ClickHouse service and check that the service and the data are healthy
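
For example, the upgraded batch can be brought back up and its installed package versions confirmed before moving on to the next batch (reusing the node list from the commands above):

$ ansible 192.168.87.151,192.168.87.152 -m shell -a 'sudo service clickhouse-server start'
$ ansible 192.168.87.151,192.168.87.152 -m shell -a 'rpm -q clickhouse-server clickhouse-common-static clickhouse-client'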

