
Redis master-replica architecture.
In this architecture the master node handles both reads and writes, while the replica nodes are read-only. If the master goes down, the whole Redis service becomes unavailable, because there is no node left that can accept writes.
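To make the read/write split concrete, here is a minimal Jedis sketch. It is not part of the original walkthrough; it assumes the master is published on host port 7001 and a replica on 7002 (matching the compose file below) and that the VM's IP is 10.36.144.128 - adjust these to your environment. Note that replication is asynchronous, so a read from the replica may briefly lag behind the write.

import redis.clients.jedis.Jedis;

public class MasterReplicaDemo {
    public static void main(String[] args) {
        // Writes must go to the master (redis1, published on host port 7001 below)
        try (Jedis master = new Jedis("10.36.144.128", 7001)) {
            master.set("greeting", "hello");
        }

        // Reads can be served by a replica (redis2, published on host port 7002).
        // Writing here would fail with an error like:
        // "READONLY You can't write against a read only replica."
        try (Jedis replica = new Jedis("10.36.144.128", 7002)) {
            System.out.println(replica.get("greeting"));
        }
    }
}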
Sentinel.
Sentinel removes this single point of failure from the master-replica architecture. That is what the sentinel1.conf, sentinel2.conf and sentinel3.conf file mappings in the compose file are for: a sentinel process runs inside each Redis container, and when the master fails, the sentinels agree on the failure and promote one of the replicas to be the new master.
version: '3.1'
services:
  redis1:
    image: daocloud.io/library/redis:5.0.7
    restart: always
    container_name: redis1
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 7001:6379
    volumes:
      - ./conf/redis1.conf:/usr/local/redis/redis.conf
      - ./conf/sentinel1.conf:/data/sentinel.conf
    command: ["redis-server","/usr/local/redis/redis.conf"]
  redis2:
    image: daocloud.io/library/redis:5.0.7
    restart: always
    container_name: redis2
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 7002:6379
    volumes:
      - ./conf/redis2.conf:/usr/local/redis/redis.conf
      - ./conf/sentinel2.conf:/data/sentinel.conf
    links:
      - redis1:master
    command: ["redis-server","/usr/local/redis/redis.conf"]
  redis3:
    image: daocloud.io/library/redis:5.0.7
    restart: always
    container_name: redis3
    environment:
      - TZ=Asia/Shanghai
    ports:
      - 7003:6379
    volumes:
      - ./conf/redis3.conf:/usr/local/redis/redis.conf
      - ./conf/sentinel3.conf:/data/sentinel.conf
    links:
      - redis1:master
    command: ["redis-server","/usr/local/redis/redis.conf"]
mkdir conf
// Create the config files
touch redis1.conf redis2.conf redis3.conf sentinel1.conf sentinel2.conf sentinel3.conf

// redis2.conf and redis3.conf (the replicas) contain:
replicaof master 6379

// sentinel1.conf (runs inside the master container) contains:
# Run the sentinel as a background process
daemonize yes
# Master to monitor: name, host, port, quorum (on the master node itself, use localhost)
sentinel monitor master localhost 6379 2
# On the replica nodes, the linked hostname "master" is used instead:
# sentinel monitor master master 6379 2
# Consider the master down after it has been unreachable for this many milliseconds
sentinel down-after-milliseconds master 10000

// sentinel2.conf and sentinel3.conf contain:
# Run the sentinel as a background process
daemonize yes
# Master to monitor: name, host, port, quorum (replicas reach it via the link hostname "master")
sentinel monitor master master 6379 2
# Consider the master down after it has been unreachable for this many milliseconds
sentinel down-after-milliseconds master 10000

// Start the containers
docker-compose up -d
// Enter each of the 3 redis containers and start the sentinel inside it
docker exec -it 39 bash
redis-sentinel sentinel.conf
// Test whether the sentinels handle failover: stop the master container
docker stop 39
// Enter either of the other two redis containers
docker exec -it a8 bash
redis-cli
// Check the server info; if one of the former replicas now reports role:master, the failover worked
info
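On the application side, a sentinel-aware connection pool lets clients follow the failover automatically. Below is a minimal Jedis sketch, not part of the original walkthrough: the master name "master" matches the sentinel monitor line above, but the sentinel addresses are hypothetical, since the compose file does not publish the default sentinel port 26379 to the host (you would need to expose it, or run the client inside the Docker network).

import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelDemo {
    public static void main(String[] args) {
        // Addresses of the sentinel processes, in "host:port" form (hypothetical values)
        Set<String> sentinels = new HashSet<>();
        sentinels.add("10.36.144.128:26379");
        sentinels.add("10.36.144.128:26380");
        sentinels.add("10.36.144.128:26381");

        // "master" is the monitored master name from sentinel.conf
        try (JedisSentinelPool pool = new JedisSentinelPool("master", sentinels);
             Jedis jedis = pool.getResource()) {
            // The pool always hands out a connection to the node the sentinels
            // currently consider the master, even after a failover.
            jedis.set("name", "zs");
            System.out.println(jedis.get("name"));
        }
    }
}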
Redis Cluster.
A cluster provides the basic guarantees of master-replica plus sentinel (replication and automatic failover), and on top of that it increases the amount of data Redis can store by sharding keys across multiple master nodes.
The cluster has no central node. To reach a majority when voting on failures, the number of master nodes is conventionally odd (2n + 1, with a minimum of 3); with one replica per master, that gives the 6-node setup built below.
cd /opt
mkdir docker_redis_jq
cd docker_redis_jq
vi docker-compose.yml
version: "3.1"services:redis1:image: daocloud.io/library/redis:5.0.7restart: alwayscontainer_name: redis1environment:- TZ=Asia/Shanghaiports:- 7001:7001- 17001:17001volumes:- ./conf/redis1.conf:/usr/local/redis/redis.confcommand: ["redis-server","/usr/local/redis/redis.conf"]redis2:image: daocloud.io/library/redis:5.0.7restart: alwayscontainer_name: redis2environment:- TZ=Asia/Shanghaiports:- 7002:7002- 17002:17002volumes:- ./conf/redis2.conf:/usr/local/redis/redis.confcommand: ["redis-server","/usr/local/redis/redis.conf"]redis3:image: daocloud.io/library/redis:5.0.7restart: alwayscontainer_name: redis3environment:- TZ=Asia/Shanghaiports:- 7003:7003- 17003:17003volumes:- ./conf/redis3.conf:/usr/local/redis/redis.confcommand: ["redis-server","/usr/local/redis/redis.conf"]redis4:image: daocloud.io/library/redis:5.0.7restart: alwayscontainer_name: redis4environment:- TZ=Asia/Shanghaiports:- 7004:7004- 17004:17004volumes:- ./conf/redis4.conf:/usr/local/redis/redis.confcommand: ["redis-server","/usr/local/redis/redis.conf"]redis5:image: daocloud.io/library/redis:5.0.7restart: alwayscontainer_name: redis5environment:- TZ=Asia/Shanghaiports:- 7005:7005- 17005:17005volumes:- ./conf/redis5.conf:/usr/local/redis/redis.confcommand: ["redis-server","/usr/local/redis/redis.conf"]redis6:image: daocloud.io/library/redis:5.0.7restart: alwayscontainer_name: redis6environment:- TZ=Asia/Shanghaiports:- 7006:7006- 17006:17006volumes:- ./conf/redis6.conf:/usr/local/redis/redis.confcommand: ["redis-server","/usr/local/redis/redis.conf"]
mkdir conf
cd conf
touch redis1.conf redis2.conf redis3.conf redis4.conf redis5.conf redis6.conf
vi redis1.conf

// redis1.conf contains:
# Port this redis node listens on
port 7001
# Enable cluster mode
cluster-enabled yes
# File in which this node stores its cluster state
cluster-config-file nodes-7001.conf
# Externally announced IP address of the node, i.e. the VM's IP
cluster-announce-ip 10.36.144.110
# Externally announced client port
cluster-announce-port 7001
# Externally announced cluster bus port
cluster-announce-bus-port 17001

// vi redis2.conf
// redis2.conf contains the following; redis3.conf through redis6.conf follow the same pattern
# Port this redis node listens on
port 7002
# Enable cluster mode
cluster-enabled yes
# File in which this node stores its cluster state
cluster-config-file nodes-7002.conf
# Externally announced IP address of the node, i.e. the VM's IP
cluster-announce-ip 10.36.144.110
# Externally announced client port
cluster-announce-port 7002
# Externally announced cluster bus port
cluster-announce-bus-port 17002

// Then enter any one of the redis containers and create the cluster. The Docker-managed Redis containers must already be running, and any other containers that would conflict (such as the earlier sentinel setup) should be stopped first.
redis-cli --cluster create 10.36.144.110:7001 10.36.144.110:7002 10.36.144.110:7003 10.36.144.110:7004 10.36.144.110:7005 10.36.144.110:7006 --cluster-replicas 1
// Connect to the cluster (if this fails, exit the container first and run the command again)
redis-cli -h 10.36.144.110 -p 7001 -c
// Test: if repeated writes get redirected to different nodes depending on the key, the cluster is working
set a 1
Connecting to the Redis cluster from Java.
import java.util.HashSet;
import java.util.Set;

import org.junit.Test;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

@Test
public void t9() {
    // Seed nodes of the cluster; any reachable node is enough to discover the rest
    Set<HostAndPort> nodes = new HashSet<>();
    nodes.add(new HostAndPort("10.36.144.128", 7001));
    nodes.add(new HostAndPort("10.36.144.128", 7002));
    nodes.add(new HostAndPort("10.36.144.128", 7003));
    nodes.add(new HostAndPort("10.36.144.128", 7004));
    nodes.add(new HostAndPort("10.36.144.128", 7005));
    nodes.add(new HostAndPort("10.36.144.128", 7006));
    JedisCluster jedisCluster = new JedisCluster(nodes);

    // The write is routed to whichever master owns the key's hash slot
    String s = jedisCluster.set("name", "zs");
    System.out.println(s);

    // Reads back the value written in the command-line test above (set a 1)
    String res = jedisCluster.get("a");
    System.out.println(res);
}
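JedisCluster fetches the full slot-to-node map from any reachable seed node and follows MOVED redirects transparently, so the list above does not strictly need to contain every node. The instance is thread-safe and relatively expensive to create, so in a real application it should be built once, reused, and closed on shutdown.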
ElasticSearch.
ElasticSearch is a search-engine framework written in Java on top of Lucene. It provides distributed full-text search and exposes a unified RESTful-style web interface.
Lucene itself is just the low-level search-engine library that ElasticSearch builds on.
Installing ElasticSearch. The VM needs at least 4 GB of memory.
We need to install ES, Kibana, and the IK analyzer (a Chinese word segmenter).
// Raise the kernel's memory-map limit (Elasticsearch requires vm.max_map_count to be at least 262144)
vi /etc/sysctl.conf
// Add to sysctl.conf:
vm.max_map_count = 665600
// Apply the change
sysctl -p

cd /opt
mkdir docker_es
cd docker_es
// Look up the VM's IP
ip a | grep ens33
// Docker config; replace the IP with your own VM's IP
vi docker-compose.yml
version: "3.1"services:elasticsearch:image: daocloud.io/library/elasticsearch:6.5.4restart: alwayscontainer_name: elasticsearchports:- 9200:9200kibana:image: daocloud.io/library/kibana:6.5.4restart: alwayscontainer_name: kibanaports:- 5601:5601environment:- elasticsearch_url=http://10.36.144.128:9200depends_on:- elasticsearch
docker-compose up -d
// Enter the ES container
docker exec -it 84 bash
cd bin
// Install the IK analyzer plugin
./elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.5.4/elasticsearch-analysis-ik-6.5.4.zip
// Exit and restart the ES container so the IK analyzer takes effect
exit
docker-compose restart
Test. Open http://10.36.144.128:5601/ in a browser to reach Kibana's graphical UI for ES, go to Dev Tools, paste the JSON below, select it and run it. Note that the opening { must start on its own line below the POST line.
POST _analyze
{
  "analyzer": "ik_max_word",
  "text": "玛尔扎哈 迪丽热巴"
}
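The same request can also be sent from Java without any ES client library. Here is a minimal sketch, not from the original article, using only the JDK's HttpURLConnection; it assumes ES is reachable at http://10.36.144.128:9200 (adjust host and port to your VM).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class IkAnalyzeDemo {
    public static void main(String[] args) throws Exception {
        // Same request the Kibana Dev Tools example sends: POST /_analyze
        URL url = new URL("http://10.36.144.128:9200/_analyze");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        String body = "{\"analyzer\": \"ik_max_word\", \"text\": \"玛尔扎哈 迪丽热巴\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // The response JSON lists the tokens produced by the IK analyzer
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}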




