ClickHouse Cluster Deployment - clickhouse-keeper

**Preface**

This document covers building a cluster based on ClickHouse Keeper. The procedure is almost the same as the standalone installation; the differences are an extra block of configuration in config.xml, an additional metrika.xml configuration file, and a few more ports that need to be opened in the firewall during installation. The steps below are the standalone ClickHouse installation steps extended with the cluster content; in the source document the cluster-specific content is marked in yellow.

SVN address for the multi-node configuration file samples: https://192.168.60.125/svn/非银绩效分析/06_实施/01_中信XT/部署/clickhouse/集群版本配置_clickhouse-keeper/

**ClickHouse Installation and Startup**

Each step below lists the operation, the user to run it as, and the corresponding commands.
**Create user** (as root)

1. Create the user:

    useradd clickhouse
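To confirm the account exists before continuing, a quick check like the following can be used (a minimal sketch; it assumes useradd created a matching clickhouse group, which is the default on most distributions):

```bash
# Verify the clickhouse user and its primary group were created
id clickhouse
```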
**Extract installation packages** (as root)

1. Upload the installation packages and the standalone configuration files to /home/clickhouse. SVN path: https://192.168.60.125/svn/非银绩效分析/06_实施/01_中信XT/部署/clickhouse
2. Extract the packages (clickhouse-client-22.3.6.5-amd64.tgz, clickhouse-common-static-22.3.6.5-amd64.tgz, clickhouse-server-22.3.6.5-amd64.tgz):

    cd /home/clickhouse
    tar -zxvf clickhouse-client-22.3.6.5-amd64.tgz
    tar -zxvf clickhouse-common-static-22.3.6.5-amd64.tgz
    tar -zxvf clickhouse-server-22.3.6.5-amd64.tgz
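Since the three packages share the same naming pattern, the extraction can also be done in one loop that first verifies each archive is readable (a sketch; it assumes the tarballs are the only clickhouse-*.tgz files under /home/clickhouse):

```bash
# Verify each ClickHouse 22.3.6.5 tarball is intact, then extract it
cd /home/clickhouse
for pkg in clickhouse-*-22.3.6.5-amd64.tgz; do
    tar -tzf "$pkg" > /dev/null && tar -zxvf "$pkg"
done
```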
**Install the columnar database** (as root)

1. Install the common package:

    ./clickhouse-common-static-22.3.6.5/install/doinst.sh

2. Install the server:

    ./clickhouse-server-22.3.6.5/install/doinst.sh

   When prompted, enter cebrisk; at the next prompt, enter N. The server is then installed successfully.

3. Install the client:

    ./clickhouse-client-22.3.6.5/install/doinst.sh
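After the three install scripts finish, a quick sanity check confirms the binaries are on PATH and report the expected version (a sketch; both tools accept a --version flag):

```bash
# Confirm the installed binaries respond and report version 22.3.6.5
clickhouse-server --version
clickhouse-client --version
```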
**Add firewall port configuration** (as root)

1. Add the port rules:

    firewall-cmd --add-port=8123/tcp --permanent
    firewall-cmd --add-port=9000/tcp --permanent
    firewall-cmd --add-port=9004/tcp --permanent

   Cluster: the following ports must additionally be opened for the clickhouse-keeper cluster:

    firewall-cmd --add-port=9181/tcp --permanent
    firewall-cmd --add-port=9234/tcp --permanent
    firewall-cmd --add-port=9009/tcp --permanent

2. Reload the firewall configuration:

    firewall-cmd --reload
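The six rules can also be applied in one loop and then verified (a sketch; it assumes firewalld is running and the rules go into the default zone):

```bash
# Open all ClickHouse and Keeper ports, then confirm they are active
for port in 8123 9000 9004 9181 9234 9009; do
    firewall-cmd --add-port=${port}/tcp --permanent
done
firewall-cmd --reload
firewall-cmd --list-ports   # should include all six ports
```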
**Create data mount directory** (as root)

1. Create the data mount directory (adjust to the actual disk mount point). The generic form is mkdir -p /&lt;large-volume&gt;/ck1; here we use:

    mkdir -p /data/clickhouse/ck1

2. Grant ownership of the directory (generic form: chown clickhouse:clickhouse -R /&lt;large-volume&gt;/ck1):

    chown clickhouse:clickhouse -R /data/clickhouse/ck1

   Check the storage path configured in storage.xml, and update the local IP and related settings in config.xml.

   clickhouse-keeper cluster: add the cluster configuration below to config.xml. Note:

   1. The config.xml of every node in the cluster must contain this block, and the <server_id> value must be different in every file.
   2. Under <raft_configuration>, configure one <server> entry for each node in the cluster; in the three-node sample below, the whole <raft_configuration> section is identical in every config.xml. <id> and <priority> can be incrementing numbers, <hostname> is set to the corresponding node's IP, and port 9234 stays unchanged.
   3. <interserver_http_host> must be changed to the IP address of the local node.

   Sample:

    <keeper_server>
        <tcp_port>9181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>5000</operation_timeout_ms>
            <session_timeout_ms>10000</session_timeout_ms>
            <snapshot_distance>75</snapshot_distance>
            <raft_logs_level>trace</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>192.168.88.67</hostname>
                <port>9234</port>
                <can_become_leader>true</can_become_leader>
                <priority>1</priority>
            </server>
            <server>
                <id>2</id>
                <hostname>192.168.88.68</hostname>
                <port>9234</port>
                <can_become_leader>true</can_become_leader>
                <start_as_follower>true</start_as_follower>
                <priority>2</priority>
            </server>
            <server>
                <id>3</id>
                <hostname>192.168.88.69</hostname>
                <port>9234</port>
                <can_become_leader>true</can_become_leader>
                <start_as_follower>true</start_as_follower>
                <priority>3</priority>
            </server>
        </raft_configuration>
    </keeper_server>

    <include_from>/etc/clickhouse-server/metrika.xml</include_from>
    <remote_servers incl="clickhouse_remote_servers" />
    <interserver_http_host>192.168.88.67</interserver_http_host>

   Add the cluster configuration file metrika.xml. Note:

   1. Both the <clickhouse_remote_servers> section and the <zookeeper-servers> section must list the IPs of all nodes; <clickhouse_remote_servers> also needs the actual database port (usually 9000; if it was changed, use the new value) and the database user and password (the password can also be configured in encrypted form).
   2. Port 9181 in the <zookeeper-servers> section does not need to be changed; the <replica> value in the <macros> section in the middle is set to the local node's IP.

   Sample:

    <?xml version="1.0"?>
    <clickhouse>
        <clickhouse_remote_servers>
            <!-- cluster name; adjust to the actual environment -->
            <ck_cluster>
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>192.168.88.67</host>
                        <port>9000</port>
                        <user>cebrisk</user>
                        <password>cebrisk</password>
                    </replica>
                    <replica>
                        <host>192.168.88.68</host>
                        <port>9000</port>
                        <user>cebrisk</user>
                        <password>cebrisk</password>
                    </replica>
                    <replica>
                        <host>192.168.88.69</host>
                        <port>9000</port>
                        <user>cebrisk</user>
                        <password>cebrisk</password>
                    </replica>
                </shard>
            </ck_cluster>
        </clickhouse_remote_servers>

        <macros>
            <shard>node01</shard>
            <replica>192.168.88.67</replica>
        </macros>

        <zookeeper-servers>
            <node index="1">
                <host>192.168.88.67</host>
                <port>9181</port>
            </node>
            <node index="2">
                <host>192.168.88.68</host>
                <port>9181</port>
            </node>
            <node index="3">
                <host>192.168.88.69</host>
                <port>9181</port>
            </node>
        </zookeeper-servers>

        <clickhouse_compression>
            <case>
                <min_part_size>10000000000</min_part_size>
                <min_part_size_ratio>0.01</min_part_size_ratio>
                <method>lz4</method>
            </case>
        </clickhouse_compression>
    </clickhouse>

2.1 For other changes related to the large-volume directory, see the accompanying file. If the directory runs out of space and the data must be moved to another directory, run the corresponding commands (TBD; the commands are not yet finalized).

3. Copy the configuration files (the path in storage.xml must first be changed to the large-volume directory):

    cp storage.xml /etc/clickhouse-server/config.d/storage.xml
    cp users.xml /etc/clickhouse-server/users.xml
    cp config.xml /etc/clickhouse-server/config.xml
    cp metrika.xml /etc/clickhouse-server/metrika.xml

   If prompted whether to overwrite, enter y (yes).

4. Grant ownership of the directories to the clickhouse user:

    chown clickhouse:clickhouse -R /etc/clickhouse-server
    chown clickhouse:clickhouse -R /etc/clickhouse-client/
    chown clickhouse:clickhouse -R /home/clickhouse/ck1/

5. Double-check that the large-volume paths in /etc/clickhouse-server/config.xml have been updated:

    clickhouse.logger.log
    clickhouse.logger.errorlog
    clickhouse.keeper_server.snapshot_storage_path
    clickhouse.tmp_path
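Before starting the server, it can help to confirm that the merged configuration (config.xml plus the metrika.xml include) resolves to the values intended for this node (a sketch; clickhouse-extract-from-config ships with the server package, and the keys shown match the sample configuration above):

```bash
# Print the effective values this node will use after include substitution
clickhouse-extract-from-config --config-file /etc/clickhouse-server/config.xml --key keeper_server.server_id
clickhouse-extract-from-config --config-file /etc/clickhouse-server/config.xml --key interserver_http_host
```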
**Start the service and synchronize data** (as root)

1. Start the service:

    clickhouse start

2. Log in with the client:

    clickhouse-client -h 127.0.0.1 -u cebrisk --password cebrisk

   If a login error occurs, check the log to see whether it is related to the commands below (they are referenced in config.xml, which can also be consulted to resolve the problem); if so, run:

    openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096
    openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt
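Once the service is running on all nodes, both the Keeper quorum and the cluster definition can be verified (a sketch; it assumes nc is installed and uses the sample IPs and credentials from the configuration above):

```bash
# 1) Keeper health: "imok" means the node answers; "stat" shows its leader/follower role
for host in 192.168.88.67 192.168.88.68 192.168.88.69; do
    echo "== $host =="
    echo ruok | nc "$host" 9181; echo
    echo stat | nc "$host" 9181 | grep -i mode
done

# 2) Cluster definition as seen by the server (all replicas should be listed)
clickhouse-client -h 127.0.0.1 -u cebrisk --password cebrisk \
    --query "SELECT cluster, shard_num, replica_num, host_address, port FROM system.clusters"
```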
**Install clickhouse-backup**

0. cd /home/clickhouse
1. tar -xf clickhouse-backup.tar
2. cd clickhouse-backup
3. cp clickhouse-backup /usr/local/bin
4. Verify:

    clickhouse-backup -v

5. Grant ownership to the clickhouse user:

    chown clickhouse:clickhouse -R /usr/local/bin/clickhouse-backup

6. Add the configuration file at /etc/clickhouse-backup/config.yml; if the directory or file does not exist, create it. Check whether the username and password need to be changed:

    general:
      remote_storage: none
      backups_to_keep_local: 30
      backups_to_keep_remote: 31
    clickhouse:
      username: cebrisk
      password: cebrisk
      host: localhost
      port: 9000
      data_path: "/var/log/clickhouse-server/data"
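With the binary and config in place, a basic backup round-trip can be exercised (a sketch; the backup name is illustrative, and clickhouse-backup must run on the node that stores the data):

```bash
# List tables visible to clickhouse-backup, create a local backup, and list backups
clickhouse-backup tables
clickhouse-backup create my_first_backup   # "my_first_backup" is an illustrative name
clickhouse-backup list
# To restore: clickhouse-backup restore my_first_backup
```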
**View logs**

    tail -200f /var/log/clickhouse-server/clickhouse-server.log
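When hunting for startup problems, it can be quicker to filter the log for error-level entries (a sketch; the log path assumes the default location used in the step above):

```bash
# Show only error/fatal entries from the server log, most recent last
grep -E '<Error>|<Fatal>' /var/log/clickhouse-server/clickhouse-server.log | tail -50
```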
**Import database scripts**

    clickhouse-client -h 127.0.0.1 -u cebrisk --password cebrisk --multiquery < /home/clickhouse/20200627_clickhouse单库基础数据.sql

Note that the script creates the database itself; adjust it as needed for the actual environment.
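After the import, the created objects can be confirmed from the client (a sketch; which database to inspect depends on what the script creates):

```bash
# List databases and spot-check that the imported objects exist
clickhouse-client -h 127.0.0.1 -u cebrisk --password cebrisk --query "SHOW DATABASES"
# Then, for a database created by the script (name depends on the script):
# clickhouse-client -h 127.0.0.1 -u cebrisk --password cebrisk --query "SHOW TABLES FROM <database>"
```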