Installing OpenStack (Rocky) on CentOS 7 - 01. Preparing the Controller Node System Environment
Table of Contents
• 1.0 System environment
• 1.1 Configure name resolution
o 1) Set the hostname
o 2) Configure hostname resolution
• 1.2 Disable the firewall and SELinux
o 1) Disable iptables
o 2) Disable SELinux
• 1.3 Configure time synchronization
o 1) Install the time synchronization service on the controller
o 2) Edit the configuration file and confirm the following settings
o 3) Restart the NTP service and enable it at boot
o 4) Set the time zone and synchronize the time
• 1.4 Configure the yum repositories
o 1) Configure the Aliyun base and epel repositories
o 2) Install the openstack-rocky repository
o 3) Update the packages
o 4) Install the OpenStack client packages
• 1.5 Install the database on the controller node
o 1) Install the MariaDB packages
o 2) Create the OpenStack database configuration file
o 3) Start the database and enable it at boot
o 4) Initialize the database and restart it
o 5) Create the OpenStack databases and grant privileges
• 1.6 Install the RabbitMQ message queue on the controller node
o 1) Install rabbitmq-server
o 2) Start RabbitMQ and enable it at boot
o 3) Create the openstack account and password in the message queue
o 4) Enable the rabbitmq_management plugin for web management
o 5) Test RabbitMQ access from a browser
• 1.7 Install Memcached on the controller node
o 1) Install Memcached for token caching
o 2) Modify the memcached configuration file
o 3) Start memcached and enable it at boot
• 1.8 Install the etcd service on the controller node
o 1) Install the etcd service
o 2) Modify the etcd configuration file
o 3) Start etcd and enable it at boot


Here are my notes on installing and managing the Rocky release of OpenStack.
OpenStack publishes a new release roughly every six months; the current one, Rocky (R), was released in August 2018. The installation procedure has improved with this release, but it is still fairly involved.
Official documentation: https://docs.openstack.org/install-guide/openstack-services.html
This article covers the environment preparation for the controller node.
----------------------------------------
1.0 System environment
1) For production use, physical servers are preferred; virtual machines are fine for building a test environment.
2) Operating system: the current latest release, CentOS Linux release 7.5.1804 (Core)
3) Controller node (Controller): 192.168.1.81
Compute node (Nova): 192.168.1.82
1.1 Configure name resolution
1) Set the hostname

Pick the hostname once and do not change it afterwards, or problems will follow; configure the controller node and the compute node the same way (each with its own name).

hostname openstack01.zuiyoujie.com
hostname
echo "openstack01.zuiyoujie.com" > /etc/hostname
cat /etc/hostname
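
An equivalent way on CentOS 7 is systemd's hostnamectl, which writes /etc/hostname for you (shown here only as an optional alternative to the echo above):

# optional alternative: set the hostname persistently via systemd
hostnamectl set-hostname openstack01.zuiyoujie.com
hostnamectl status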
2) Configure hostname resolution
vim /etc/hosts

192.168.1.81 openstack01.zuiyoujie.com controller
192.168.1.82 openstack02.zuiyoujie.com compute02 block02 object02

Map both the FQDN hostnames and the cluster role names, to simplify later configuration

1.2 Disable the firewall and SELinux
1) Disable iptables

On CentOS 7 the firewall service is firewalld

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
2) Disable SELinux
setenforce 0
getenforce
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
grep SELINUX=disabled /etc/sysconfig/selinux
1.3 Configure time synchronization
1) Install the time synchronization service on the controller
yum install chrony -y
2) Edit the configuration file and confirm it contains the following
vim /etc/chrony.conf

server ntp1.aliyun.com iburst
server ntp2.aliyun.com iburst
allow 192.168.1.0/24

3) Restart the NTP service and enable it at boot
systemctl restart chronyd.service
systemctl status chronyd.service
systemctl enable chronyd.service
systemctl list-unit-files |grep chronyd.service
4) Set the time zone and synchronize the time
timedatectl set-timezone Asia/Shanghai
chronyc sources
timedatectl status

When the configuration is done, the output looks like this

[root@openstack01 ~]# chronyc sources
210 Number of sources = 2
MS Name/IP address Stratum Poll Reach LastRx Last sample

^* 120.25.115.20 2 6 17 9 +17ms[ +22ms] +/- 34ms
^+ 203.107.6.88 2 6 17 9 +3029us[+8251us] +/- 54ms
[root@openstack01 ~]# timedatectl status
Local time: Mon 2018-10-22 15:13:51 CST
Universal time: Mon 2018-10-22 07:13:51 UTC
RTC time: Mon 2018-10-22 07:13:52
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a

1.4 Configure the yum repositories
1) Configure the Aliyun base and epel repositories
mv -f /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
2) Install the openstack-rocky repository
yum install centos-release-openstack-rocky -y
yum clean all
yum makecache

Alternatively, create the Aliyun OpenStack repository file by hand

vim /etc/yum.repos.d/CentOS-OpenStack-Rocky.repo

[centos-openstack-rocky]
name=CentOS-7 - OpenStack rocky
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud

[centos-openstack-rocky-test]
name=CentOS-7 - OpenStack rocky Testing
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=0
enabled=0

[centos-openstack-rocky-debuginfo]
name=CentOS-7 - OpenStack rocky - Debug
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud

[centos-openstack-rocky-source]
name=CentOS-7 - OpenStack rocky - Source
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud

[rdo-trunk-rocky-tested]
name=OpenStack rocky Trunk Tested
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/rdo-trunk-rocky-tested/
gpgcheck=0
enabled=0

3) Update the packages
yum update -y
4) Install the OpenStack client packages
yum install python-openstackclient openstack-selinux -y
1.5 Install the database on the controller node

Optionally, tune the kernel and system limits to raise the maximum number of connections and open file handles

1) Install the MariaDB packages

On CentOS 7.5 the default database is MariaDB

yum install mariadb mariadb-server MySQL-python python2-PyMySQL -y
2) Create the OpenStack database configuration file

vim /etc/my.cnf.d/mariadb_openstack.cnf

Add the following settings under [mysqld]


[mysqld]
bind-address = 0.0.0.0
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
init-connect = 'SET NAMES utf8'

Explanation of the settings:

default-storage-engine = innodb      # default storage engine
innodb_file_per_table                # file-per-table tablespaces: each table gets its own tablespace and index file, which makes index lookups faster; with a shared tablespace everything shares one tablespace and index, and corruption is hard to repair (a zabbix database, for example, is hard to optimize without file-per-table tablespaces)
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
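
Once the service has been started in the next step (and the root password from step 4 is set), you can confirm these settings actually took effect with a quick query, for example:

# sanity check: the values should match mariadb_openstack.cnf
mysql -uroot -p123456 -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_file_per_table','character_set_server','max_connections');"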
3) Start the database and enable it at boot
systemctl restart mariadb.service
systemctl status mariadb.service

systemctl enable mariadb.service
systemctl list-unit-files |grep mariadb.service
4) Initialize the database and restart it

Set the root password: the current password is empty, then enter the new password 123456 and answer y to the remaining prompts

/usr/bin/mysql_secure_installation
systemctl restart mariadb.service

Note: in production you can generate database passwords with the pwgen tool, or with openssl as below

openssl rand -hex 10
5) Create the OpenStack databases and grant privileges

Just test the database connection here; the individual databases will be created later as each service needs them

mysql -p123456

flush privileges;
show databases;
select user,host from mysql.user;
exit

At this point the database configuration is complete

1.6 Install the RabbitMQ message queue on the controller node
Message Queue (MQ) is a method of application-to-application communication. Applications communicate by reading and writing messages (application data) to and from queues, without requiring a dedicated connection between them.
Message passing means programs communicate by sending data in messages rather than by calling each other directly, as is done in techniques such as remote procedure calls. Queuing means the applications communicate through queues.
Using queues removes the requirement that the sending and receiving applications run at the same time.
RabbitMQ is a complete, reusable enterprise messaging system built on AMQP, released under the Mozilla Public License.
1) Install rabbitmq-server
yum install rabbitmq-server -y
2) Start RabbitMQ and enable it at boot

Ports 5672 and 15672; useful to know for troubleshooting

systemctl start rabbitmq-server.service
systemctl status rabbitmq-server.service

systemctl enable rabbitmq-server.service
systemctl list-unit-files |grep rabbitmq-server.service
3) Create the openstack account and password in the message queue

Add the openstack user and password, then grant it configure, write and read permissions

rabbitmqctl add_user openstack openstack
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
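
To double-check the account and its permissions before moving on, you can list them (exact output formatting varies between RabbitMQ versions):

rabbitmqctl list_users              # should include the openstack user
rabbitmqctl list_permissions -p /   # openstack should show ".*" ".*" ".*"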
4) Enable the rabbitmq_management plugin for web management

List the available plugins

rabbitmq-plugins list

Enable the web management plugin; the service must be restarted for it to take effect

rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server.service
rabbitmq-plugins list
lsof -i:15672
5) Test RabbitMQ access from a browser
URL: http://192.168.1.81:15672

The default username and password are both guest

The web UI can be used to create users and manage permissions

RabbitMQ configuration is complete

1.7 Install Memcached on the controller node
The identity service uses Memcached to cache authentication tokens. The memcached service runs on the controller node. For production deployments it is recommended to enable a firewall, authentication, and encryption to secure it.
1) Install Memcached for token caching
yum install memcached python-memcached -y
2) Modify the memcached configuration file
vim /etc/sysconfig/memcached

OPTIONS="-l 127.0.0.1,controller"

If IPv6 is not enabled, remove the ::1 address binding

3) Start memcached and enable it at boot
systemctl start memcached.service
systemctl status memcached.service
netstat -anptl|grep memcached

systemctl enable memcached.service
systemctl list-unit-files |grep memcached.service
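
If you want an extra check that memcached answers on the configured address, its stats command works over a plain TCP connection (this assumes the nc tool is installed):

echo stats | nc 127.0.0.1 11211 | head -5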

Notes on the memcached options:

-d    run as a daemon in the background
-m    amount of memory allocated to Memcache, in MB (10 MB here)
-u    user that runs Memcache (root here)
-l    server IP address(es) to listen on, comma-separated if there are several
-p    port Memcache listens on, preferably above 1024 (12000 here)
-c    maximum number of concurrent connections, default 1024; set it according to your server load (256 here)
-P    path of the Memcache pid file
-vv   start in very verbose mode, printing debug information and errors to the console

At this point memcached is configured

1.8 Install the etcd service on the controller node

The etcd service is a new addition in this release, used for automated configuration

1) Install the etcd service
yum install etcd -y
2) Modify the etcd configuration file

vim /etc/etcd/etcd.conf

#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.81:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.81:2379"
ETCD_NAME="controller"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.81:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.81:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.1.81:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"

Note: the IP addresses above cannot be replaced with the controller name; etcd does not resolve it here

3) Start etcd and enable it at boot
systemctl start etcd.service
systemctl status etcd.service
netstat -anptl|grep etcd

systemctl enable etcd.service
systemctl list-unit-files |grep etcd.service
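
A quick health check with etcdctl against the client URL configured above can confirm the member is up (a single-member cluster, so one healthy member is expected):

etcdctl --endpoints=http://192.168.1.81:2379 cluster-health
etcdctl --endpoints=http://192.168.1.81:2379 member list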

At this point the basic environment of the controller node is ready, and the OpenStack components can be installed next

If this is a virtual machine, now is a good time to shut it down and take a snapshot

Installing OpenStack (Rocky) on CentOS 7 - 02. Installing the Keystone Identity Service (Controller Node)
Table of Contents
• 2.0 The keystone identity service
• 2.1 Create the keystone database on the controller node
o 1) Create the keystone database and grant privileges
• 2.2 Install the keystone packages on the controller node
o 1) Install the keystone packages
o 2) Quickly configure keystone
• 2.3 Initialize and synchronize the keystone database
o 1) Synchronize the keystone database (44 tables)
o 2) Test the connection after synchronization
• 2.4 Initialize the Fernet key repositories
• 2.5 Configure and start Apache (httpd)
o 1) Modify the main httpd configuration file
o 2) Configure the virtual host
o 3) Start httpd and enable it at boot
• 2.6 Bootstrap the keystone identity service
o 1) Create the keystone user, initial service entity and API endpoints
o 2) Temporarily export the admin account variables for management
• 2.7 Create the standard keystone objects
o 1) Create a keystone domain named example
o 2) Create the service project used by the keystone environment
o 3) Create the myproject project and its user and role
o 4) Create the myuser user in the default domain
o 5) Create the myrole role in the role table
o 6) Add the myrole role to the myproject project and the myuser user
• 2.8 Verify that keystone works
o 1) Unset the environment variables
o 2) Request an authentication token as the admin user
o 3) Request an authentication token as a regular user
• 2.9 Create the OpenStack client environment scripts
o 1) Create the environment script for the admin user
o 2) Create the client environment script for the regular user myuser
o 3) Test the environment scripts
o 4) Request an authentication token


This article covers keystone, the OpenStack identity service.
----------------------------------------
2.0 The keystone identity service
1) Users and authentication: user permissions and user activity tracking
User
Tenant
Token
Role
2) Service catalog: provides a catalog of all services and the endpoints of their APIs
Service
Endpoint
2.1 Create the keystone database on the controller node
1) Create the keystone database and grant privileges

mysql -p123456

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
flush privileges;
show databases;
select user,host from mysql.user;
exit

2.2 Install the keystone packages on the controller node
1) Install the keystone packages

Keystone is served by Apache with mod_wsgi, which answers identity requests on ports 5000 and 35357; by default keystone still listens on these ports

yum install openstack-keystone httpd mod_wsgi -y
yum install openstack-keystone python-keystoneclient openstack-utils -y
2) Quickly configure keystone

The quick-configuration commands below require the openstack-utils package installed above

openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:keystone@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet

Note: keystone does not need to connect to rabbitmq
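
For reference, the two openstack-config commands above simply edit /etc/keystone/keystone.conf; doing it by hand would mean setting roughly the following options:

[database]
connection = mysql+pymysql://keystone:keystone@controller/keystone

[token]
provider = fernet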

View the effective configuration

egrep -v "^#|^$" /etc/keystone/keystone.conf

Another way to view the effective configuration

grep '^[a-z]' /etc/keystone/keystone.conf

Example:

[root@openstack01 tools]# grep '^[a-z]' /etc/keystone/keystone.conf
connection = mysql+pymysql://keystone:keystone@controller/keystone
provider = fernet

Keystone itself does not need to be started; it is invoked through the httpd service

2.3 Initialize and synchronize the keystone database
1) Synchronize the keystone database (44 tables)
su -s /bin/sh -c "keystone-manage db_sync" keystone
2) Test the connection after synchronization

Make sure all required tables exist, otherwise the following steps may fail

mysql -h192.168.1.81 -ukeystone -pkeystone -e "use keystone;show tables;"
Example:

[root@openstack01 ~]# mysql -h192.168.1.81 -ukeystone -pkeystone -e “use keystone;show tables;”
±----------------------------+
| Tables_in_keystone |
±----------------------------+
| access_token |
| application_credential |
| application_credential_role |
| assignment |
| config_register |
| consumer |
| credential |
| endpoint |
| endpoint_group |
| federated_user |
| federation_protocol |
| group |
| id_mapping |
| identity_provider |
| idp_remote_ids |
| implied_role |
| limit |
| local_user |
| mapping |
| migrate_version |
| nonlocal_user |
| password |
| policy |
| policy_association |
| project |
| project_endpoint |
| project_endpoint_group |
| project_tag |
| region |
| registered_limit |
| request_token |
| revocation_event |
| role |
| sensitive_config |
| service |
| service_provider |
| system_assignment |
| token |
| trust |
| trust_role |
| user |
| user_group_membership |
| user_option |
| whitelisted_config |
±----------------------------+
[root@openstack01 ~]# mysql -h192.168.1.81 -ukeystone -pkeystone -e “use keystone;show tables;”|wc -l
45

2.4 Initialize the Fernet key repositories

Initialize Fernet key repositories:

For background on Fernet tokens, see: https://blog.csdn.net/wllabs/article/details/79064094

The following commands produce no output

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
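
On success nothing is printed; to convince yourself the key repositories were created, list them (these are the default paths, adjust if key_repository was changed):

ls -l /etc/keystone/fernet-keys/        # should contain keys 0 and 1
ls -l /etc/keystone/credential-keys/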
2.5 Configure and start Apache (httpd)
1) Modify the main httpd configuration file
vim /etc/httpd/conf/httpd.conf +95

ServerName controller

or

sed -i "s/#ServerName www.example.com:80/ServerName 192.168.1.81/" /etc/httpd/conf/httpd.conf
cat /etc/httpd/conf/httpd.conf |grep ServerName
2) Configure the virtual host

Create a symlink to the keystone virtual-host configuration file (copying the file works just as well)

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

Alternatively, create the file by hand with the following content

cat /usr/share/keystone/wsgi-keystone.conf

[root@openstack01 ~]# cat /usr/share/keystone/wsgi-keystone.conf
Listen 5000

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    LimitRequestBody 114688
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined

    <Directory /usr/bin>
        <IfVersion >= 2.4>
            Require all granted
        </IfVersion>
        <IfVersion < 2.4>
            Order allow,deny
            Allow from all
        </IfVersion>
    </Directory>
</VirtualHost>

Alias /identity /usr/bin/keystone-wsgi-public
<Location /identity>
    SetHandler wsgi-script
    Options +ExecCGI

    WSGIProcessGroup keystone-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
</Location>

3) Start httpd and enable it at boot
systemctl start httpd.service
systemctl status httpd.service
netstat -anptl|grep httpd

systemctl enable httpd.service
systemctl list-unit-files |grep httpd.service

If httpd fails to start, disable SELinux or install the openstack-selinux package (yum install openstack-selinux)

Example:

[root@openstack01 ~]# systemctl start httpd.service
[root@openstack01 ~]# systemctl status httpd.service
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
Active: active (running) since 五 2018-10-26 18:06:20 CST; 98ms ago
Docs: man:httpd(8)
man:apachectl(8)
Main PID: 1978 (httpd)
Status: “Processing requests…”
CGroup: /system.slice/httpd.service
├─1978 /usr/sbin/httpd -DFOREGROUND
├─1981 (wsgi:keystone- -DFOREGROUND
├─1982 (wsgi:keystone- -DFOREGROUND
├─1983 (wsgi:keystone- -DFOREGROUND
├─1984 (wsgi:keystone- -DFOREGROUND
├─1985 (wsgi:keystone- -DFOREGROUND
├─1986 /usr/sbin/httpd -DFOREGROUND
├─1988 /usr/sbin/httpd -DFOREGROUND
└─1989 /usr/sbin/httpd -DFOREGROUND

10月 26 18:06:20 openstack01.zuiyoujie.com systemd[1]: Starting The Apache HTTP Server…
10月 26 18:06:20 openstack01.zuiyoujie.com systemd[1]: Started The Apache HTTP Server.
[root@openstack01 ~]# netstat -anptl|grep httpd
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 1978/httpd
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1978/httpd
[root@openstack01 ~]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@openstack01 ~]# systemctl list-unit-files |grep httpd.service
httpd.service enabled

At this point the httpd service is configured

2.6 Bootstrap the keystone identity service
1) Create the keystone user, initial service entity and API endpoints

In releases before Queens, bootstrapping required two ports (5000 for users and 35357 for administration); in this release a single port serves both

Create the keystone service entity and identity endpoints; the three types are public, internal and admin.

A password (ADMIN_PASS) is needed for the OpenStack admin user; here it is set to 123456

Example command:

keystone-manage bootstrap --bootstrap-password 123456 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Running this command performs the following in the keystone database (earlier releases required doing it by hand):

1) adds the 3 API endpoints of the service entity to the endpoint table
2) creates the admin user in the local_user table
3) creates the admin project and the Default domain in the project table
4) creates 3 roles in the role table: admin, member and reader
5) creates the identity service in the service table
2) Temporarily export the admin account variables for management

OS_PASSWORD below must be the ADMIN_PASS configured above

export OS_PROJECT_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3

View the declared variables

env |grep OS_
Example:

[root@openstack01 ~]# env|grep OS_
OS_USER_DOMAIN_NAME=Default
OS_PROJECT_NAME=admin
OS_IDENTITY_API_VERSION=3
OS_PASSWORD=123456
OS_AUTH_URL=http://controller:5000/v3
OS_USERNAME=admin
OS_PROJECT_DOMAIN_NAME=Default

Appendix: common openstack management commands; they require the admin environment variables

View the keystone-related objects

openstack endpoint list
openstack project list
openstack user list
Example:

[root@openstack01 ~]# openstack endpoint list
±---------------------------------±----------±-------------±-------------±--------±----------±---------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
±---------------------------------±----------±-------------±-------------±--------±----------±---------------------------+
| b8dabe6c548e435eb2b1f7efe3b23236 | RegionOne | keystone | identity | True | admin | http://controller:5000/v3/ |
| eb72eb6ea51842feb67ba5849beea48c | RegionOne | keystone | identity | True | internal | http://controller:5000/v3/ |
| f172f6159ad34fbd8e10e0d42828d8cd | RegionOne | keystone | identity | True | public | http://controller:5000/v3/ |
±---------------------------------±----------±-------------±-------------±--------±----------±---------------------------+
[root@openstack01 ~]# openstack project list
±---------------------------------±----------+
| ID | Name |
±---------------------------------±----------+
| 3706708374804e2eb4ed056f55d84666 | admin |
| 84cc7185f2c8461eb19a14968228b272 | myproject |
| b8e318b3c7a844708762169959c34ff8 | service |
±---------------------------------±----------+
[root@openstack01 ~]# openstack user list
±---------------------------------±-------+
| ID | Name |
±---------------------------------±-------+
| cbb2b3830a8f44bc837230bca27ae563 | myuser |
| e5dbfc8b394c41679fd5ce229cdd6ed3 | admin |
±---------------------------------±-------+

Deleting an endpoint

In earlier releases, endpoints created by hand could easily be wrong and need deleting; in this release they are generated automatically and, as long as the configuration is correct, rarely need fixing

openstack endpoint delete [ID]
2.7 Create the standard keystone objects

Create a domain, projects, users, and roles

https://docs.openstack.org/keystone/rocky/install/keystone-users-rdo.html
1) Create a keystone domain named example

The following command adds a row named example to the project table (domains are stored there)

openstack domain create --description "An Example Domain" example
Example:

[root@openstack01 ~]# openstack domain create --description “An Example Domain” example
±------------±---------------------------------+
| Field | Value |
±------------±---------------------------------+
| description | An Example Domain |
| enabled | True |
| id | 17254ea898de477ca4a1f6f3cbc6c5bc |
| name | example |
| tags | [] |
±------------±---------------------------------+

2) Create the service project used by the keystone environment

It is used for regular (non-admin) tasks by unprivileged service users

The following command adds a project named service to the project table

openstack project create --domain default --description "Service Project" service
Example:

[root@openstack01 ~]# openstack project create --domain default --description “Service Project” service
±------------±---------------------------------+
| Field | Value |
±------------±---------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | b8e318b3c7a844708762169959c34ff8 |
| is_domain | False |
| name | service |
| parent_id | default |
| tags | [] |
±------------±---------------------------------+

3) Create the myproject project and its user and role

A project for regular (non-admin) users

The following command adds a project named myproject to the project table

openstack project create --domain default --description "Demo Project" myproject
Example:

[root@openstack01 ~]# openstack project create --domain default --description “Demo Project” myproject
±------------±---------------------------------+
| Field | Value |
±------------±---------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 84cc7185f2c8461eb19a14968228b272 |
| is_domain | False |
| name | myproject |
| parent_id | default |
| tags | [] |
±------------±---------------------------------+

4) Create the myuser user in the default domain

Use --password to set the password directly in clear text, or --password-prompt to enter it interactively

The following command adds the myuser user to the local_user table

openstack user create --domain default --password-prompt myuser # enter the password interactively

openstack user create --domain default --password=myuser myuser # create the user and password directly

Example:

[root@openstack01 ~]# openstack user create --domain default --password-prompt myuser
User Password:
Repeat User Password:
±--------------------±---------------------------------+
| Field | Value |
±--------------------±---------------------------------+
| domain_id | default |
| enabled | True |
| id | cbb2b3830a8f44bc837230bca27ae563 |
| name | myuser |
| options | {} |
| password_expires_at | None |
±--------------------±---------------------------------+

5) Create the myrole role in the role table
openstack role create myrole
Example:

[root@openstack01 ~]# openstack role create myrole
±----------±---------------------------------+
| Field | Value |
±----------±---------------------------------+
| domain_id | None |
| id | 75ac33f79cc945afa42a18a3dd0ba0ad |
| name | myrole |
±----------±---------------------------------+

6) Add the myrole role to the myproject project and the myuser user

The following command returns nothing, and the table change is not obvious

openstack role add --project myproject --user myuser myrole
2.8 Verify that keystone works
1) Unset the environment variables

Disable the temporary token mechanism and request a token to verify that keystone is configured correctly

unset OS_AUTH_URL OS_PASSWORD
env |grep OS_
2) Request an authentication token as the admin user

Verify that the admin account can authenticate and request a token

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Example:

[root@openstack01 ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
Password:
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2018-10-26T11:48:40+0000 |
| id | gAAAAABb0vEIENgBaYEBJZSJX7RDelXdM2sHi_hbfT-FHTjd3z5j5Mt-sssJpW1EXeWVAbMdyBI2t9XNCxG5m1XNm_2k1xWP7WnbOYAp1rl2FZCwz4LL0F-mER_bOW-HnE0rjA6YvP0MzW4HVg0eEE_6zACr0R0NaaVytK_eRsvO_Lhco6vacYY |
| project_id | 3706708374804e2eb4ed056f55d84666 |
| user_id | e5dbfc8b394c41679fd5ce229cdd6ed3 |
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

3) Request an authentication token as a regular user

The following command uses the password of the "myuser" user and API port 5000, which only allows regular (non-admin) access to the identity service API.

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
Example:

[root@openstack01 ~]# openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue
Password:
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2018-10-26T11:49:18+0000 |
| id | gAAAAABb0vEuxOrgkmLfcZJl8vB6dJyrHFtvxBT1m7qLYzuD-WkOVoQUzE9mTGcrKE6CrZbLU57Nc7mv-50-ggH9pf2qrW5uWQu7MRJcUb3rgpmoYn7EVdv8X0lGK3IiWEPSF48u1b2y7mEmvYb7TGOFO8l87of6L2aaJmdMxp9KgM87_3Mu2-g |
| project_id | 84cc7185f2c8461eb19a14968228b272 |
| user_id | cbb2b3830a8f44bc837230bca27ae563 |
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

2.9 Create the OpenStack client environment scripts

Create OpenStack client environment scripts

Above, the openstack client talked to the identity service through a combination of environment variables and command options.

To make client operations more efficient, OpenStack supports simple client environment scripts, also called OpenRC files. The official guide names the admin script admin-openrc; custom file names are used here.

1) Create the environment script for the admin user

cd /server/tools
vim keystone-admin-pass.sh

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

env |grep OS_

Usage note:

If you ever forget the dashboard login password, it can be reset using the admin_token mechanism
2) Create the client environment script for the regular user myuser

vim keystone-myuser-pass.sh

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

3) Test the environment scripts

Load the client configuration from the script so the client can quickly run as a specific project and user

source keystone-admin-pass.sh
4) Request an authentication token
openstack token issue
Example:

[root@openstack01 tools]# openstack token issue
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires | 2018-10-26T12:13:28+0000 |
| id | gAAAAABb0vbYr–LRd1NJ9ZXH68zSR4mIW4hDr6UqqiPmsA7vNEGDcMx8o-6Ihy8o47c5jo5GInOCe9KpKMfbXtdWPz6QkkWzZcFMqwXYS4tUI8DjjamEUBqFwlI10Oxbq7pEIGKVtFdMrOHy3EoLmE1rjY0p4DDm48pt3u8ON807nr0MUa1zIE |
| project_id | 3706708374804e2eb4ed056f55d84666 |
| user_id | e5dbfc8b394c41679fd5ce229cdd6ed3 |
±-----------±----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

The user_id matches the one obtained with the command earlier, which confirms the configuration is correct

At this point keystone is installed

Installing OpenStack (Rocky) on CentOS 7 - 03. Installing the Glance Image Service (Controller Node)
Table of Contents
• 3.0 Glance overview
o 1) Role and features of glance
o 2) Components of the glance image service
• 3.1 Install the glance image service on the controller node
o 1) Create the glance database
• 3.2 Register glance in keystone
o 1) Create the glance user in keystone
o 2) Add the glance user to the service project with the admin role
o 3) Create the glance image service entity
o 4) Create the image service API endpoints
• 3.3 Install the glance packages
o 1) Check the Python version
o 2) Install the glance packages
o 3) Quickly configure glance-api.conf with the following commands
o 4) Quickly configure glance-registry.conf with the following commands
• 3.4 Synchronize the glance database
o 1) Initialize and synchronize the database for the glance image service
o 2) Test the connection after synchronization
• 3.5 Start the glance image service
o 1) Start the glance image service and enable it at boot
o 2) Other commands: restart, stop
• 3.6 Verify that glance is installed correctly
o 1) Download an image
o 2) Load the admin credentials
o 3) Upload the image to glance
o 4) List the images


The previous article covered keystone; this one covers glance, the OpenStack image service.
----------------------------------------
3.0 Glance overview
1) Role and features of glance
The image service (glance) lets users register and discover virtual machine images. It provides a REST API for querying image metadata and retrieving an existing image.
Virtual machine images can be stored in a variety of locations, from a simple filesystem to an object storage system such as OpenStack Object Storage, and served through the image service.
With the file backend configured here, uploaded images are stored under the default directory /var/lib/glance/images/.
2) Components of the glance image service
glance-api:
accepts image REST API calls such as image discovery, retrieval, upload and deletion
glance-registry:
talks to the MySQL database and listens on port 9191;
provides the REST interface for image metadata, storing, processing and retrieving metadata such as size and type.
Through glance-registry, image data can be written to or read from the database.
It uses two tables: the image table stores information such as image format and size, and the image property table stores customized metadata.
Note: glance-registry is a private internal service meant for use by the OpenStack Image service itself; it must not be exposed to users
image: the storage repository for image files
Several repository types are supported: plain filesystems, object stores, RADOS block devices, HTTP, and Amazon S3. Some of them can only be used read-only.
image store:
a storage interface layer through which glance retrieves images; supported backends include Amazon S3, OpenStack's own Swift, and distributed stores such as Ceph, Sheepdog and GlusterFS.
The image store is only the interface for saving and fetching images; the actual storage is provided externally.
Database:
stores the image metadata; any supported database can be used, and most deployments use MySQL or SQLite.
Metadata definition service:
a common API for vendors, administrators, services and users to define custom metadata.
This metadata can be attached to different resources such as images, artifacts, volumes, quotas and aggregates.
A definition includes the new property's key, description, constraints, and the resource types it can be associated with.
3.1 Install the glance image service on the controller node
1) Create the glance database

mysql -p123456

CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
flush privileges;
exit

3.2 Register glance in keystone
1) Create the glance user in keystone

The following commands create the glance user in the local_user table

cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=glance glance
openstack user list

Example:

[root@openstack01 tools]# openstack user create --domain default --password=glance glance
±--------------------±---------------------------------+
| Field | Value |
±--------------------±---------------------------------+
| domain_id | default |
| enabled | True |
| id | 82a27e65ca644a5eadcd54ff44e5e05b |
| name | glance |
| options | {} |
| password_expires_at | None |
±--------------------±---------------------------------+
[root@openstack01 tools]# openstack user list
±---------------------------------±-------+
| ID | Name |
±---------------------------------±-------+
| 82a27e65ca644a5eadcd54ff44e5e05b | glance |
| cbb2b3830a8f44bc837230bca27ae563 | myuser |
| e5dbfc8b394c41679fd5ce229cdd6ed3 | admin |
±---------------------------------±-------+

2) Add the glance user to the service project with the admin role

The following command produces no output

openstack role add --project service --user glance admin
3) Create the glance image service entity

The following command adds the glance entry to the service table

openstack service create --name glance --description "OpenStack Image" image
openstack service list

Example:

[root@openstack01 tools]# openstack service create --name glance --description “OpenStack Image” image
±------------±---------------------------------+
| Field | Value |
±------------±---------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 6c31f22e259b460fa0168ac206265c30 |
| name | glance |
| type | image |
±------------±---------------------------------+
[root@openstack01 tools]# openstack service list
±---------------------------------±---------±---------+
| ID | Name | Type |
±---------------------------------±---------±---------+
| 63c882889b204d81a9867f9b7c0ba7aa | keystone | identity |
| 6c31f22e259b460fa0168ac206265c30 | glance | image |
±---------------------------------±---------±---------+

4) Create the image service API endpoints

The following commands add 3 entries to the endpoint table

openstack endpoint create --region RegionOne image public http://192.168.1.81:9292
openstack endpoint create --region RegionOne image internal http://192.168.1.81:9292
openstack endpoint create --region RegionOne image admin http://192.168.1.81:9292
openstack endpoint list

Example:

[root@openstack01 tools]# openstack endpoint create --region RegionOne image public http://controller:9292
±-------------±---------------------------------+
| Field | Value |
±-------------±---------------------------------+
| enabled | True |
| id | f13c44af4e8d45d5b0229ea870f2c24f |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6c31f22e259b460fa0168ac206265c30 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
±-------------±---------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne image internal http://controller:9292
±-------------±---------------------------------+
| Field | Value |
±-------------±---------------------------------+
| enabled | True |
| id | 756084d018c948039d2ae55b13fc7d4a |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6c31f22e259b460fa0168ac206265c30 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
±-------------±---------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne image admin http://controller:9292
±-------------±---------------------------------+
| Field | Value |
±-------------±---------------------------------+
| enabled | True |
| id | 7226f8f9c7164214b815821b77ae3ce6 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 6c31f22e259b460fa0168ac206265c30 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
±-------------±---------------------------------+
[root@openstack01 tools]# openstack endpoint list
±---------------------------------±----------±-------------±-------------±--------±----------±---------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
±---------------------------------±----------±-------------±-------------±--------±----------±---------------------------+
| 7226f8f9c7164214b815821b77ae3ce6 | RegionOne | glance | image | True | admin | http://controller:9292 |
| 756084d018c948039d2ae55b13fc7d4a | RegionOne | glance | image | True | internal | http://controller:9292 |
| b8dabe6c548e435eb2b1f7efe3b23236 | RegionOne | keystone | identity | True | admin | http://controller:5000/v3/ |
| eb72eb6ea51842feb67ba5849beea48c | RegionOne | keystone | identity | True | internal | http://controller:5000/v3/ |
| f13c44af4e8d45d5b0229ea870f2c24f | RegionOne | glance | image | True | public | http://controller:9292 |
| f172f6159ad34fbd8e10e0d42828d8cd | RegionOne | keystone | identity | True | public | http://controller:5000/v3/ |
±---------------------------------±----------±-------------±-------------±--------±----------±---------------------------+

At this point glance is registered in keystone and can be installed

3.3 Install the glance packages
1) Check the Python version

Confirm the system's Python version before installing glance

In this release there is a bug under Python 3.5 that can cause SSL problems; the details are here:

https://docs.openstack.org/glance/rocky/install/get-started.html#running-glance-under-python3
Running Glance Under Python3¶
You should always run Glance under whatever version of Python your distribution of OpenStack specifies.
If you are building OpenStack yourself from source, Glance is currently supported to run under Python2 (specifically, Python 2.7 or later).
Some deployment configuration is required if you wish to run Glance under Python3. Glance is tested with unit and functional tests running Python 3.5. The eventlet-based server that Glance runs, however, is currently affected by a bug that prevents SSL handshakes from completing (see Bug #1482633). Thus if you wish to run Glance under Python 3.5, you must deploy Glance in such a way that SSL termination is handled by something like HAProxy before calls reach Glance.
python --version
[root@openstack01 tools]# python --version
Python 2.7.5
2) Install the glance packages
yum install openstack-glance python-glance python-glanceclient -y
3) Quickly configure glance-api.conf with the following commands

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:glance@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

4) Quickly configure glance-registry.conf with the following commands

openstack-config --set /etc/glance/glance-registry.conf database connection mysql+pymysql://glance:glance@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

View the effective configuration

[root@openstack01 tools]# grep '^[a-z]' /etc/glance/glance-api.conf
connection = mysql+pymysql://glance:glance@controller/glance
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
flavor = keystone

[root@openstack01 tools]# grep '^[a-z]' /etc/glance/glance-registry.conf
connection = mysql+pymysql://glance:glance@controller/glance
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance
flavor = keystone

At this point the glance packages are configured; the services still need to be started

3.4 Synchronize the glance database
1) Initialize and synchronize the database for the glance image service

This generates the glance tables (15 tables)

su -s /bin/sh -c "glance-manage db_sync" glance
2) Test the connection after synchronization

Make sure all required tables exist, otherwise the following steps may fail

mysql -h192.168.1.81 -uglance -pglance -e "use glance;show tables;"
Example:

[root@openstack01 tools]# mysql -h192.168.1.81 -uglance -pglance -e “use glance;show tables;”
±---------------------------------+
| Tables_in_glance |
±---------------------------------+
| alembic_version |
| image_locations |
| image_members |
| image_properties |
| image_tags |
| images |
| metadef_namespace_resource_types |
| metadef_namespaces |
| metadef_objects |
| metadef_properties |
| metadef_resource_types |
| metadef_tags |
| migrate_version |
| task_info |
| tasks |
±---------------------------------+

3.5 Start the glance image service
1) Start the glance image service and enable it at boot
systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl status openstack-glance-api.service openstack-glance-registry.service

systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl list-unit-files |grep openstack-glance*
2) Other commands: restart, stop
systemctl restart openstack-glance-api.service openstack-glance-registry.service
systemctl stop openstack-glance-api.service openstack-glance-registry.service
Example:

[root@openstack01 tools]# systemctl start openstack-glance-api.service openstack-glance-registry.service
[root@openstack01 tools]# systemctl status openstack-glance-api.service openstack-glance-registry.service
● openstack-glance-api.service - OpenStack Image Service (code-named Glance) API server
Loaded: loaded (/usr/lib/systemd/system/openstack-glance-api.service; disabled; vendor preset: disabled)
Active: active (running) since 五 2018-10-26 21:54:42 CST; 237ms ago
Main PID: 5420 (glance-api)
CGroup: /system.slice/openstack-glance-api.service
└─5420 /usr/bin/python2 /usr/bin/glance-api

10月 26 21:54:42 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Image Service (code-named Glance) API server.
10月 26 21:54:42 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Image Service (code-named Glance) API server…

● openstack-glance-registry.service - OpenStack Image Service (code-named Glance) Registry server
Loaded: loaded (/usr/lib/systemd/system/openstack-glance-registry.service; disabled; vendor preset: disabled)
Active: active (running) since 五 2018-10-26 21:54:43 CST; 77ms ago
Main PID: 5421 (glance-registry)
CGroup: /system.slice/openstack-glance-registry.service
└─5421 /usr/bin/python2 /usr/bin/glance-registry

10月 26 21:54:43 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Image Service (code-named Glance) Registry server.
10月 26 21:54:43 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Image Service (code-named Glance) Registry server…
[root@openstack01 tools]# systemctl enable openstack-glance-api.service openstack-glance-registry.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-api.service to /usr/lib/systemd/system/openstack-glance-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-glance-registry.service to /usr/lib/systemd/system/openstack-glance-registry.service.
[root@openstack01 tools]# systemctl list-unit-files |grep openstack-glance*
openstack-glance-api.service enabled
openstack-glance-registry.service enabled
openstack-glance-scrubber.service disabled
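
Before uploading an image, it can also be worth confirming that the two services are listening on their usual ports (9292 for glance-api, 9191 for glance-registry):

netstat -lntp | grep -E '9292|9191'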

3.6 Verify that glance is installed correctly

CirrOS is a small Linux image suitable for testing an OpenStack deployment.

Download location: http://download.cirros-cloud.net/

1) Download an image
cd /server/tools
wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
2) Load the admin credentials
source keystone-admin-pass.sh
3) Upload the image to glance

Upload the image to the image service with the qcow2 disk format and bare container format, and make it public so every project can use it

openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Example:

[root@openstack01 tools]# openstack image create “cirros” --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
±-----------------±-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
±-----------------±-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2018-10-26T14:02:00Z |
| disk_format | qcow2 |
| file | /v2/images/78f5671b-fb2d-494f-8da7-25dbe425cad6/file |
| id | 78f5671b-fb2d-494f-8da7-25dbe425cad6 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 3706708374804e2eb4ed056f55d84666 |
| properties | os_hash_algo=‘sha512’, os_hash_value=‘f0fd1b50420dce4ca382ccfbb528eef3a38bbeff00b54e95e3876b9bafe7ed2d6f919ca35d9046d437c6d2d8698b1174a335fbd66035bb3edc525d2cdb187232’, os_hidden=‘False’ |
| protected | False |
| schema | /v2/schemas/image |
| size | 13267968 |
| status | active |
| tags | |
| updated_at | 2018-10-26T14:02:00Z |
| virtual_size | None |
| visibility | public |
±-----------------±-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

4) List the images
openstack image list
Example:
[root@openstack01 tools]# openstack image list
±-------------------------------------±-------±-------+
| ID | Name | Status |
±-------------------------------------±-------±-------+
| 78f5671b-fb2d-494f-8da7-25dbe425cad6 | cirros | active |
±-------------------------------------±-------±-------+
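
The uploaded file should also show up in the backing store directory configured earlier (filesystem_store_datadir), named after the image ID:

ls -lh /var/lib/glance/images/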

At this point the glance image service is installed and running

Installing OpenStack (Rocky) on CentOS 7 - 04. Installing the Nova Compute Service (Controller Node)
Table of Contents
• 4.1 Install the nova compute service on the controller node
o 1) Create the nova databases
• 4.2 Register the nova service in keystone
o 1) Create the nova user in keystone
o 2) Add the nova user to the service project with the admin role
o 3) Create the nova compute service entity
o 4) Create the compute service API endpoints
o 5) This release of nova adds the placement project
• 4.3 Install the nova services on the controller node
o 1) Install the nova packages
o 2) Quickly configure nova
o 3) Modify the nova virtual host configuration file
• 4.4 Synchronize the nova databases (mind the order)
o 1) Initialize the nova-api and placement databases
o 2) Initialize the nova_cell0 and nova databases
o 3) Confirm that cell0 and cell1 are registered
• 4.5 Start the nova services
o 1) Start the nova services and enable them at boot


The previous article covered the glance image service; this one covers the installation and configuration of Nova, the OpenStack compute service.
----------------------------------------
Ports used by nova:
api: 8774
metadata: 8775
novncproxy: 6080
4.1 Install the nova compute service on the controller node
1) Create the nova databases

Nova adds two new databases in this release, which is worth noting

mysql -u root -p123456

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
CREATE DATABASE placement;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';

GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';

flush privileges;
show databases;
select user,host from mysql.user;
exit

4.2 Register the nova service in keystone

Create the service credentials

1) Create the nova user in keystone
cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=nova nova
openstack user list
2) Add the nova user to the service project with the admin role

The following command produces no output

openstack role add --project service --user nova admin
3) Create the nova compute service entity
openstack service create --name nova --description "OpenStack Compute" compute
openstack service list
4) Create the compute service API endpoints

Compute service endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
openstack endpoint list
5) This release of nova adds the placement project

Create and register its service credentials in the same way

openstack user create --domain default --password=placement placement
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement

Create the endpoints (API ports) for the placement project

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
openstack endpoint list

Registration is complete

4.3 Install the nova services on the controller node
1) Install the nova packages
yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api -y
2) Quickly configure nova

openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.1.81
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:nova@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:nova@controller/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:placement@controller/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password placement
openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300

By default the compute service uses its own built-in firewall. Since the networking service provides a firewall, the compute service's built-in firewall must be disabled by using the nova.virt.firewall.NoopFirewallDriver driver

Check the effective nova configuration

egrep -v "^#|^$" /etc/nova/nova.conf

Example:

[root@openstack01 tools]# egrep -v "^#|^$" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.1.81
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:openstack@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:nova@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[placement_database]
connection = mysql+pymysql://placement:placement@controller/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300      # how often the controller checks for newly added compute hosts, so freshly installed compute nodes join the cluster automatically
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

The above is the effective configuration

3) Modify the nova virtual host configuration file

Because of a packaging bug, the nova placement virtual host file has to be edited and extended; the complete file content is shown below:

vim /etc/httpd/conf.d/00-nova-placement-api.conf

Listen 8778

<VirtualHost *:8778>
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
  WSGIDaemonProcess nova-placement-api processes=3 threads=1 user=nova group=nova
  WSGIScriptAlias / /usr/bin/nova-placement-api
  <IfVersion >= 2.4>
    ErrorLogFormat "%M"
  </IfVersion>
  ErrorLog /var/log/nova/nova-placement-api.log
  #SSLEngine On
  #SSLCertificateFile ...
  #SSLCertificateKeyFile ...
</VirtualHost>

Alias /nova-placement-api /usr/bin/nova-placement-api
<Location /nova-placement-api>
  SetHandler wsgi-script
  Options +ExecCGI
  WSGIProcessGroup nova-placement-api
  WSGIApplicationGroup %{GLOBAL}
  WSGIPassAuthorization On
</Location>

# added section (made by zhaoshuai)
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
Restart the httpd service after the changes

systemctl restart httpd
systemctl status httpd

Example:

[root@openstack01 conf.d]# systemctl restart httpd
[root@openstack01 conf.d]# systemctl status httpd
● httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; enabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 13:56:03 CST; 134ms ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 55849 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=0/SUCCESS)
Main PID: 55861 (httpd)
Status: “Processing requests…”
CGroup: /system.slice/httpd.service
├─55861 /usr/sbin/httpd -DFOREGROUND
├─55862 /usr/sbin/httpd -DFOREGROUND
├─55863 /usr/sbin/httpd -DFOREGROUND
├─55864 /usr/sbin/httpd -DFOREGROUND
├─55865 (wsgi:keystone- -DFOREGROUND
├─55866 (wsgi:keystone- -DFOREGROUND
├─55867 (wsgi:keystone- -DFOREGROUND
├─55868 (wsgi:keystone- -DFOREGROUND
├─55869 (wsgi:keystone- -DFOREGROUND
├─55870 /usr/sbin/httpd -DFOREGROUND
├─55871 /usr/sbin/httpd -DFOREGROUND
├─55873 /usr/sbin/httpd -DFOREGROUND
├─55874 /usr/sbin/httpd -DFOREGROUND
└─55875 /usr/sbin/httpd -DFOREGROUND

10月 29 13:56:03 openstack01.zuiyoujie.com systemd[1]: Starting The Apache HTTP Server…
10月 29 13:56:03 openstack01.zuiyoujie.com systemd[1]: Started The Apache HTTP Server.
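
With httpd back up, a quick unauthenticated request against the placement port should return a small JSON version document; an HTTP error here usually means the virtual host configuration above was not applied (this assumes the controller name resolves as configured in /etc/hosts):

curl http://controller:8778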

At this point the nova compute packages are installed

4.4 Synchronize the nova databases (mind the order)

nova_api has 32 tables, placement has 32, nova_cell0 has 110, and nova also has 110

1) Initialize the nova-api and placement databases
su -s /bin/sh -c "nova-manage api_db sync" nova

Verify the databases

mysql -h192.168.1.81 -unova -pnova -e "use nova_api;show tables;"
mysql -h192.168.1.81 -uplacement -pplacement -e "use placement;show tables;"

Example:

[root@openstack01 tools]# su -s /bin/sh -c “nova-manage api_db sync” nova
[root@openstack01 tools]# mysql -h192.168.1.81 -unova -pnova -e “use nova_api;show tables;”
±-----------------------------+
| Tables_in_nova_api |
±-----------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| build_requests |
| cell_mappings |
| consumers |
| flavor_extra_specs |
| flavor_projects |
| flavors |
| host_mappings |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_mappings |
| inventories |
| key_pairs |
| migrate_version |
| placement_aggregates |
| project_user_quotas |
| projects |
| quota_classes |
| quota_usages |
| quotas |
| request_specs |
| reservations |
| resource_classes |
| resource_provider_aggregates |
| resource_provider_traits |
| resource_providers |
| traits |
| users |
±-----------------------------+
[root@openstack01 tools]# mysql -h192.168.1.81 -uplacement -pplacement -e “use placement;show tables;”
±-----------------------------+
| Tables_in_placement |
±-----------------------------+
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| build_requests |
| cell_mappings |
| consumers |
| flavor_extra_specs |
| flavor_projects |
| flavors |
| host_mappings |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_mappings |
| inventories |
| key_pairs |
| migrate_version |
| placement_aggregates |
| project_user_quotas |
| projects |
| quota_classes |
| quota_usages |
| quotas |
| request_specs |
| reservations |
| resource_classes |
| resource_provider_aggregates |
| resource_provider_traits |
| resource_providers |
| traits |
| users |
±-----------------------------+

Comparing the two shows that nova_api and placement each have 32 tables; the difference is that the cell_mappings table in the nova_api database will hold two extra rows, which store the nova and rabbitmq connection details
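
After the cell registration in the next step, those extra rows can be inspected directly if you are curious (a quick check using the nova database credentials created above):

mysql -h192.168.1.81 -unova -pnova -e "select uuid,name,transport_url,database_connection from nova_api.cell_mappings\G"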

2) Initialize the nova_cell0 and nova databases

Register the cell0 database

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

Initialize the nova database

su -s /bin/sh -c "nova-manage db sync" nova

Confirm that cell0 and cell1 are registered

su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Verify the databases

mysql -h192.168.1.81 -unova -pnova -e "use nova_cell0;show tables;"
mysql -h192.168.1.81 -unova -pnova -e "use nova;show tables;"

Example:

[root@openstack01 tools]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
c078477e-cb43-40c9-ad8b-a9fde183747d
[root@openstack01 tools]# su -s /bin/sh -c "nova-manage db sync" nova      # two warnings appear here; they are harmless, will be fixed in a later release, and do not show up if the command is run again
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index block_device_mapping_instance_uuid_virtual_name_device_name_idx. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
/usr/lib/python2.7/site-packages/pymysql/cursors.py:170: Warning: (1831, u'Duplicate index uniq_instances0uuid. This is deprecated and will be disallowed in a future release.')
result = self._query(query)
[root@openstack01 tools]# mysql -h192.168.1.81 -unova -pnova -e "use nova_cell0;show tables;"
±-------------------------------------------+
| Tables_in_nova_cell0 |
±-------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_auth_tokens |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| inventories |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| resource_provider_aggregates |
| resource_providers |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+---------------------------------------------+
[root@openstack01 tools]# mysql -h192.168.1.81 -unova -pnova -e "use nova;show tables;"
+---------------------------------------------+
| Tables_in_nova                              |
+---------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_auth_tokens |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| inventories |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| resource_provider_aggregates |
| resource_providers |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+---------------------------------------------+

通过对比可知,这两个数据库的表目前完全一样,区别在于nova数据库的service表中有4条数据,存放的是当前版本nova相关服务的注册信息
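nova库services表里的注册信息也可以直接查询确认(仅为示例,沿用上面的数据库账号):

mysql -h192.168.1.81 -unova -pnova -e "select host,topic,report_count from nova.services;"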

5)检查确认cell0和cell1注册成功
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

实例演示:

[root@openstack01 conf.d]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------+--------------------------------------------------+----------+
|  名称 | UUID                                 | Transport URL                      | 数据库连接                                       | Disabled |
+-------+--------------------------------------+------------------------------------+--------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                             | mysql+pymysql://nova:****@controller/nova_cell0  | False    |
| cell1 | c078477e-cb43-40c9-ad8b-a9fde183747d | rabbit://openstack:****@controller | mysql+pymysql://nova:****@controller/nova        | False    |
+-------+--------------------------------------+------------------------------------+--------------------------------------------------+----------+

返回的数据存储在nova_api数据库的cell_mappings表中
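如果想直接查看这两条cell映射记录的内容,可以查询该表(仅为示例):

mysql -h192.168.1.81 -unova -pnova -e "select name,uuid,transport_url,database_connection from nova_api.cell_mappings\G"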

回到顶部
4.5.启动nova服务
1)启动nova服务并设置为开机自启动

需要启动5个服务

systemctl start openstack-nova-api.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

systemctl status openstack-nova-api.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service

systemctl list-unit-files |grep openstack-nova* |grep enabled
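这5个服务名比较长,也可以先放进一个shell变量再统一操作,效果与上面逐条执行相同(仅为示例写法):

NOVA_SVC="openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service"
systemctl start $NOVA_SVC      # 启动
systemctl enable $NOVA_SVC     # 开机自启动
systemctl status $NOVA_SVC     # 查看运行状态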

实例演示:

[root@openstack01 conf.d]# systemctl start openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
[root@openstack01 conf.d]# systemctl status openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
● openstack-nova-api.service - OpenStack Nova API Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-api.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 14:30:22 CST; 6s ago
Main PID: 56510 (nova-api)
CGroup: /system.slice/openstack-nova-api.service
├─56510 /usr/bin/python2 /usr/bin/nova-api
├─56562 /usr/bin/python2 /usr/bin/nova-api
└─56564 /usr/bin/python2 /usr/bin/nova-api

10月 29 14:30:06 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Nova API Server…
10月 29 14:30:22 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Nova API Server.

● openstack-nova-scheduler.service - OpenStack Nova Scheduler Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-scheduler.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 14:30:21 CST; 8s ago
Main PID: 56511 (nova-scheduler)
CGroup: /system.slice/openstack-nova-scheduler.service
└─56511 /usr/bin/python2 /usr/bin/nova-scheduler

10月 29 14:30:06 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Nova Scheduler Server…
10月 29 14:30:21 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Nova Scheduler Server.

● openstack-nova-conductor.service - OpenStack Nova Conductor Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-conductor.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 14:30:19 CST; 9s ago
Main PID: 56512 (nova-conductor)
CGroup: /system.slice/openstack-nova-conductor.service
└─56512 /usr/bin/python2 /usr/bin/nova-conductor

10月 29 14:30:06 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Nova Conductor Server…
10月 29 14:30:19 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Nova Conductor Server.

● openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 14:30:06 CST; 22s ago
Main PID: 56513 (nova-novncproxy)
CGroup: /system.slice/openstack-nova-novncproxy.service
└─56513 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/

10月 29 14:30:06 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Nova NoVNC Proxy Server.
10月 29 14:30:06 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Nova NoVNC Proxy Server…
[root@openstack01 conf.d]# systemctl enable openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-api.service to /usr/lib/systemd/system/openstack-nova-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-scheduler.service to /usr/lib/systemd/system/openstack-nova-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-conductor.service to /usr/lib/systemd/system/openstack-nova-conductor.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service to /usr/lib/systemd/system/openstack-nova-novncproxy.service.
[root@openstack01 conf.d]# systemctl list-unit-files |grep openstack-nova* |grep enabled
openstack-nova-api.service enabled
openstack-nova-conductor.service enabled
openstack-nova-novncproxy.service enabled
openstack-nova-scheduler.service enabled

至此,在控制节点安装nova计算服务就完成了,下篇文章介绍独立的nova计算节点的安装方法

CentOS7安装OpenStack(Rocky版)-05.安装一个nova计算节点实例
阅读目录
• 5.1.配置域名解析
o 1)配置主机名
o 2)配置主机名解析
• 5.2.关闭防火墙和selinux
o 1)关闭iptables
o 2)关闭 selinux
• 5.3.配置时间同步
o 1)在计算节点配置时间同步服务
o 2)编辑配置文件确认有以下配置
o 3)重启chronyd服务,并配置开机自启动
o 4)设置时区,首次同步时间
• 5.4.配置相关yum源
o 1)配置阿里云的base和epel源
o 2)安装openstack-rocky的仓库
o 3)更新软件包
o 4)安装openstack客户端相关软件
• 5.5.安装nova计算节点相关软件包
o 1)计算节点安装nova软件包
o 2)快速修改配置文件(/etc/nova/nova.conf)
o 3)配置虚拟机的硬件加速
o 4)启动nova相关服务,并配置为开机自启动
o 5)将计算节点增加到cell数据库
• 5.6.在控制节点进行验证
o 1)应用管理员环境变量脚本
o 2)列表查看安装的nova服务组件
o 3)在身份认证服务中列出API端点以验证其连接性
o 4)在镜像服务中列出已有镜像以检查镜像服务的连接性
o 5)检查nova各组件的状态


上一篇文章分享了控制节点的nova计算服务的安装方法。在实际生产环境中,计算服务通常部署在单独的服务器节点上,本文分享单独的nova计算节点的安装方法
---------------- 完美的分割线 -----------------

参考文章:

https://docs.openstack.org/install-guide/environment.html
https://docs.openstack.org/nova/rocky/install/compute-install-rdo.html
计算节点的配置方法与控制节点基本相同,只是在时间同步上需要连接控制节点,保证openstack集群内的服务器时间一致,否则会出现问题,需要注意
回到顶部
5.1.配置域名解析
1)配置主机名

主机名设置好就不能修改,否则会出问题,控制节点和计算节点配置相同,且都需要配置

hostname openstack02.zuiyoujie.com
hostname
echo "openstack02.zuiyoujie.com"> /etc/hostname
cat /etc/hostname
2)配置主机名解析
vim /etc/hosts

192.168.1.81 openstack01.zuiyoujie.com controller
192.168.1.82 openstack02.zuiyoujie.com compute02 block02 object02

依然是将简单的集群内角色名解析也配置上去

回到顶部
5.2.关闭防火墙和selinux
1)关闭iptables

在CentOS7上面是firewalld

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
2)关闭 selinux
setenforce 0
getenforce
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
grep SELINUX=disabled /etc/sysconfig/selinux
回到顶部
5.3.配置时间同步
1)在计算节点配置时间同步服务

安装时间同步的软件包

yum install chrony -y
2)编辑配置文件确认有以下配置
vim /etc/chrony.conf

修改引用控制节点openstack01的IP

server 192.168.1.81 iburst

3)重启chronyd服务,并配置开机自启动
systemctl restart chronyd.service
systemctl status chronyd.service
systemctl enable chronyd.service
systemctl list-unit-files |grep chronyd.service
4)设置时区,首次同步时间
timedatectl set-timezone Asia/Shanghai
chronyc sources
timedatectl status
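可以重点确认同步源确实是控制节点192.168.1.81(示例,源前面的 ^* 表示该源已被选为当前同步源):

chronyc sources | grep 192.168.1.81
chronyc tracking      # Reference ID 应指向控制节点的地址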

至此,时间同步配置完成

回到顶部
5.4.配置相关yum源
1)配置阿里云的base和epel源
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
2)安装openstack-rocky的仓库

很显然,计算节点也需要安装openstack的yum源

yum install centos-release-openstack-rocky -y
yum clean all
yum makecache

也可以手动创建OpenStack的阿里云yum源地址

vim /etc/yum.repos.d/CentOS-OpenStack-Rocky.repo

[centos-openstack-rocky]
name=CentOS-7 - OpenStack rocky
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud

[centos-openstack-rocky-test]
name=CentOS-7 - OpenStack rocky Testing
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=0
enabled=0

[centos-openstack-rocky-debuginfo]
name=CentOS-7 - OpenStack rocky - Debug
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud

[centos-openstack-rocky-source]
name=CentOS-7 - OpenStack rocky - Source
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/openstack-rocky/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Cloud

[rdo-trunk-rocky-tested]
name=OpenStack rocky Trunk Tested
baseurl=http://mirrors.aliyun.com/centos/7/cloud/$basearch/rdo-trunk-rocky-tested/
gpgcheck=0
enabled=0

3)更新软件包
yum update -y
4)安装openstack客户端相关软件
yum install python-openstackclient openstack-selinux -y

至此,openstack计算节点的系统环境配置完成,虚拟机的话可以关机做下快照

回到顶部
5.5.安装nova计算节点相关软件包
1)计算节点安装nova软件包
cd /server/tools
yum install openstack-nova-compute python-openstackclient openstack-utils -y
2)快速修改配置文件(/etc/nova/nova.conf)

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.1.82
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron True
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password nova
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password placement

服务器组件监听所有的 IP 地址,而代理组件仅仅监听计算节点管理网络接口的 IP 地址。

查看生效的配置:

egrep -v "^#|^$" /etc/nova/nova.conf

[root@openstack02 nova]# egrep -v "^#|^$" /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.1.82
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
log_date_format=%Y-%m-%d %H:%M:%S
log_file=nova-compute.log
log_dir=/var/log/nova
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[placement_database]
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.1.82
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

ok

3)配置虚拟机的硬件加速

首先确定您的计算节点是否支持虚拟机的硬件加速。

egrep -c '(vmx|svm)' /proc/cpuinfo

如果返回值为0,表示计算节点不支持硬件加速,需要配置libvirt使用QEMU方式管理虚拟机,使用以下命令:

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
egrep -v "^#|^$" /etc/nova/nova.conf |grep 'virt_type'

如果返回为其他值,表示计算节点支持硬件加速且不需要额外的配置,使用以下命令:

openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
egrep -v "^#|^$" /etc/nova/nova.conf |grep 'virt_type'
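也可以用一小段脚本自动判断并写入virt_type,效果与上面手动选择一致(仅为示例):

if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    # 不支持硬件加速,退回QEMU软件模拟
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
else
    # 支持硬件加速,使用KVM
    openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
fi
egrep '^virt_type' /etc/nova/nova.conf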
4)启动nova相关服务,并配置为开机自启动

需要启动2个服务

systemctl start libvirtd.service openstack-nova-compute.service
systemctl status libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl list-unit-files |grep libvirtd.service
systemctl list-unit-files |grep openstack-nova-compute.service
5)将计算节点增加到cell数据库

以下命令在控制节点操作:

cd /server/tools
source keystone-admin-pass.sh

检查确认数据库有新的计算节点

openstack compute service list --service nova-compute

实例演示:

[root@openstack01 tools]# openstack compute service list --service nova-compute
+----+--------------+---------------------------+------+---------+-------+----------------------------+
| ID | Binary       | Host                      | Zone | Status  | State | Updated At                 |
+----+--------------+---------------------------+------+---------+-------+----------------------------+
| 6  | nova-compute | openstack02.zuiyoujie.com | nova | enabled | up    | 2018-10-29T12:02:40.000000 |
+----+--------------+---------------------------+------+---------+-------+----------------------------+

手动将新的计算节点添加到openstack集群

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

实例演示:

[root@openstack01 tools]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': c078477e-cb43-40c9-ad8b-a9fde183747d
Found 0 unmapped computes in cell: c078477e-cb43-40c9-ad8b-a9fde183747d

设置新创建节点自动注册的任务(已经添加到配置文件中)

[scheduler]
discover_hosts_in_cells_interval = 300
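如果该参数还没有写入,可以在控制节点用下面的命令补上并重启调度服务(仅为示例,表示每300秒自动发现一次新的计算节点):

openstack-config --set /etc/nova/nova.conf scheduler discover_hosts_in_cells_interval 300
systemctl restart openstack-nova-scheduler.service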

至此,计算节点安装完毕,接下来需要进行测试,检查nova节点的状态

回到顶部
5.6.在控制节点进行验证

参考文章:https://docs.openstack.org/nova/rocky/install/compute-install-rdo.html

1)应用管理员环境变量脚本
cd /server/tools
source keystone-admin-pass.sh
2)列表查看安装的nova服务组件

验证是否成功注册并启动了每个进程

openstack compute service list

实例演示:

[root@openstack01 tools]# openstack compute service list
+----+------------------+---------------------------+----------+---------+-------+----------------------------+
| ID | Binary           | Host                      | Zone     | Status  | State | Updated At                 |
+----+------------------+---------------------------+----------+---------+-------+----------------------------+
| 1 | nova-conductor | openstack01.zuiyoujie.com | internal | enabled | up | 2018-10-29T12:02:47.000000 |
| 2 | nova-scheduler | openstack01.zuiyoujie.com | internal | enabled | up | 2018-10-29T12:02:47.000000 |
| 5 | nova-consoleauth | openstack01.zuiyoujie.com | internal | enabled | up | 2018-10-29T12:02:42.000000 |
| 6 | nova-compute | openstack02.zuiyoujie.com | nova | enabled | up | 2018-10-29T12:02:40.000000 |
+----+------------------+---------------------------+----------+---------+-------+----------------------------+

3)在身份认证服务中列出API端点以验证其连接性
openstack catalog list

实例演示:

[root@openstack01 tools]# openstack catalog list
+-----------+-----------+------------------------------------------+
| Name      | Type      | Endpoints                                |
+-----------+-----------+------------------------------------------+
| keystone | identity | RegionOne |
| | | admin: http://controller:5000/v3/ |
| | | RegionOne |
| | | internal: http://controller:5000/v3/ |
| | | RegionOne |
| | | public: http://controller:5000/v3/ |
| | | |
| glance | image | RegionOne |
| | | admin: http://controller:9292 |
| | | RegionOne |
| | | internal: http://controller:9292 |
| | | RegionOne |
| | | public: http://controller:9292 |
| | | |
| nova | compute | RegionOne |
| | | internal: http://controller:8774/v2.1 |
| | | RegionOne |
| | | admin: http://controller:8774/v2.1 |
| | | RegionOne |
| | | public: http://controller:8774/v2.1 |
| | | |
| placement | placement | RegionOne |
| | | public: http://controller:8778 |
| | | RegionOne |
| | | admin: http://controller:8778 |
| | | RegionOne |
| | | internal: http://controller:8778 |
| | | |
+-----------+-----------+------------------------------------------+

4)在镜像服务中列出已有镜像以检查镜像服务的连接性
openstack image list

实例演示:

[root@openstack01 tools]# openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 78f5671b-fb2d-494f-8da7-25dbe425cad6 | cirros | active |
+--------------------------------------+--------+--------+

5)检查nova各组件的状态

检查placement API和cell服务是否正常工作

nova-status upgrade check

实例演示:

[root@openstack01 tools]# nova-status upgrade check
+-------------------------------+
| 升级检查结果                  |
+-------------------------------+
| 检查: Cells v2                |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Placement API           |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Resource Providers      |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Ironic Flavor Migration |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: API Service Version     |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+
| 检查: Request Spec Migration  |
| 结果: 成功                    |
| 详情: None                    |
+-------------------------------+

至此,nova计算节点,安装完毕并添加到openstack集群中

CentOS7安装OpenStack(Rocky版)-06.安装Neutron网络服务(控制节点)
阅读目录
• 6.0.Neutron概述
• 6.1.主机网络配置及测试
o 1)控制节点配置
o 2)计算节点配置
o 3)块存储节点配置
o 4)检测各节点到控制节点和公网的联通性
• 6.2.在keystone数据库中注册neutron相关服务
o 1)创建neutron数据库,授予合适的访问权限
o 2)在keystone上创建neutron用户
o 3)将neutron添加到service项目并授予admin角色
o 4)创建neutron服务实体
o 5)创建neutron网络服务的API端点(endpoint)
• 6.3.在控制节点安装neutron网络组件
o 1)安装neutron软件包
o 2)快速配置/etc/neutron/neutron.conf
o 3)快速配置/etc/neutron/plugins/ml2/ml2_conf.ini
o 4)快速配置/etc/neutron/plugins/ml2/linuxbridge_agent.ini
o 5)快速配置/etc/neutron/dhcp_agent.ini
o 6)快速配置/etc/neutron/metadata_agent.ini
o 7)配置计算服务使用网络服务
o 8)初始化安装网络插件
o 9)同步数据库
o 10)重启nova_api服务
o 11)启动neutron服务并设置开机启动
• 6.4.在计算节点安装neutron网络组件
o 1)安装neutron组件
o 2)快速配置/etc/neutron/neutron.conf
o 3)快速配置/etc/neutron/plugins/ml2/linuxbridge_agent.ini
o 4)配置nova计算服务与neutron网络服务协同工作
o 5)重启计算节点
o 6)启动neutron网络组件,并配置开机自启动
• 6.5.在控制节点检查确认neutron服务安装成功
o 1)获取管理权限
o 2)列表查看加载的网络插件
o 3)查看网络代理列表


上一章介绍了独立的nova计算节点的安装方法,本章分享openstack的网络服务neutron的安装配制方法
------------------- 完美的分割线 ---------------------
回到顶部
6.0.Neutron概述
OpenStack Networking(neutron),允许创建、插入接口设备,这些设备由其他的OpenStack服务管理。插件式的实现可以容纳不同的网络设备和软件,为OpenStack架构与部署提供了灵活性。
它包含下列组件:

neutron-server
接收和路由API请求到合适的OpenStack网络插件,以达到预想的目的。

OpenStack网络插件和代理
插拔端口,创建网络和子网,以及提供IP地址,这些插件和代理依赖于供应商和技术而不同,OpenStack网络基于插件和代理为Cisco 虚拟和物理交换机、NEC OpenFlow产品,Open vSwitch,Linux bridging以及VMware NSX 产品穿线搭桥。

常见的代理L3(3层),DHCP(动态主机IP地址),以及插件代理。

消息队列
大多数的OpenStack Networking安装都会用到,用于在neutron-server和各种各样的代理进程间路由信息。也为某些特定的插件扮演数据库的角色,以存储网络状态

OpenStack网络主要和OpenStack计算交互,以提供网络连接到它的实例。

回到顶部
6.1.主机网络配置及测试

参考文章:Install and configure controller node

https://docs.openstack.org/neutron/rocky/install/install-rdo.html
1)控制节点配置

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.81 openstack01.zuiyoujie.com controller
192.168.1.82 openstack02.zuiyoujie.com compute02 block02 object02

2)计算节点配置

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.81 openstack01.zuiyoujie.com controller
192.168.1.82 openstack02.zuiyoujie.com compute02 block02 object02

3)块存储节点配置

vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.81 openstack01.zuiyoujie.com controller
192.168.1.82 openstack02.zuiyoujie.com compute02 block02 object02

以上节点的hosts文件配置相同,其他类型节点也照此配置即可

4)检测各节点到控制节点和公网的联通性

控制节点

ping -c 4 www.baidu.com
ping -c 4 compute02
ping -c 4 block02

计算节点

ping -c 4 www.baidu.com
ping -c 4 controller

回到顶部
6.2.在keystone数据库中注册neutron相关服务
1)创建neutron数据库,授予合适的访问权限

mysql -p123456

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
exit
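可以用刚授权的neutron账号登录验证一下权限是否生效(仅为示例):

mysql -h192.168.1.81 -uneutron -pneutron -e "show databases;"      # 输出中应包含neutron库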

2)在keystone上创建neutron用户
cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=neutron neutron
openstack user list

实例演示:

[root@openstack01 tools]# openstack user create --domain default --password=neutron neutron
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | dd35b7396aa94342a01c807aaa707d21 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

[root@openstack01 tools]# openstack user list
+----------------------------------+-----------+
| ID                               | Name      |
+----------------------------------+-----------+
| 26f88ba142d04735936d09caa7c76284 | placement |
| 82a27e65ca644a5eadcd54ff44e5e05b | glance |
| cbb2b3830a8f44bc837230bca27ae563 | myuser |
| cc55913a3da44a38939cdc7a2ec764cc | nova |
| dd35b7396aa94342a01c807aaa707d21 | neutron |
| e5dbfc8b394c41679fd5ce229cdd6ed3 | admin |
+----------------------------------+-----------+

ok

3)将neutron添加到service项目并授予admin角色

以下命令无输出

openstack role add --project service --user neutron admin
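由于该命令没有输出,可以再查一下角色分配结果进行确认(仅为示例):

openstack role assignment list --user neutron --project service --names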
4)创建neutron服务实体
openstack service create --name neutron --description "OpenStack Networking" network
openstack service list

实例演示:

[root@openstack01 tools]# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 90b5d791df5e4634848c00ba35390865 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
[root@openstack01 tools]# openstack service list
+----------------------------------+-----------+-----------+
| ID                               | Name      | Type      |
+----------------------------------+-----------+-----------+
| 63c882889b204d81a9867f9b7c0ba7aa | keystone | identity |
| 6c31f22e259b460fa0168ac206265c30 | glance | image |
| 854ca66666c64e2fbeff1e9c5cc1c4df | nova | compute |
| 90b5d791df5e4634848c00ba35390865 | neutron | network |
| a79d818312b34c4c8879d7dbbd41a78c | placement | placement |
+----------------------------------+-----------+-----------+

ok

5)创建neutron网络服务的API端点(endpoint)
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
openstack endpoint list

实例演示:

[root@openstack01 tools]# openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled | True |
| id | ed17939d7623456bb203bb7197fc16c4 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 90b5d791df5e4634848c00ba35390865 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled | True |
| id | 1cba9e89dc91422390a5b987dbeffdb6 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 90b5d791df5e4634848c00ba35390865 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled | True |
| id | 2bcda9f77cdb4c06be6f35a3c3312e3d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 90b5d791df5e4634848c00ba35390865 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
[root@openstack01 tools]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                         |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+
| 022711a6476648bda1446ecb7668f315 | RegionOne | placement | placement | True | public | http://controller:8778 |
| 1291aa2f71104ce69f9b05905fbc2c8a | RegionOne | placement | placement | True | admin | http://controller:8778 |
| 1cba9e89dc91422390a5b987dbeffdb6 | RegionOne | neutron | network | True | internal | http://controller:9696 |
| 2bcda9f77cdb4c06be6f35a3c3312e3d | RegionOne | neutron | network | True | admin | http://controller:9696 |
| 3f293d128470468683d5f82a66301232 | RegionOne | placement | placement | True | internal | http://controller:8778 |
| 43960ef2a79a45d49bfd22a2dbf4c2ce | RegionOne | nova | compute | True | internal | http://controller:8774/v2.1 |
| 7129fffdb2614227aca641b10635efdf | RegionOne | nova | compute | True | admin | http://controller:8774/v2.1 |
| 7226f8f9c7164214b815821b77ae3ce6 | RegionOne | glance | image | True | admin | http://controller:9292 |
| 756084d018c948039d2ae55b13fc7d4a | RegionOne | glance | image | True | internal | http://controller:9292 |
| 7f0461c745b340ef83372059782d22ee | RegionOne | nova | compute | True | public | http://controller:8774/v2.1 |
| b8dabe6c548e435eb2b1f7efe3b23236 | RegionOne | keystone | identity | True | admin | http://controller:5000/v3/ |
| eb72eb6ea51842feb67ba5849beea48c | RegionOne | keystone | identity | True | internal | http://controller:5000/v3/ |
| ed17939d7623456bb203bb7197fc16c4 | RegionOne | neutron | network | True | public | http://controller:9696 |
| f13c44af4e8d45d5b0229ea870f2c24f | RegionOne | glance | image | True | public | http://controller:9292 |
| f172f6159ad34fbd8e10e0d42828d8cd | RegionOne | keystone | identity | True | public | http://controller:5000/v3/ |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------+

ok

回到顶部
6.3.在控制节点安装neutron网络组件

关于neutron的网络提供了两种方式:

https://docs.openstack.org/neutron/rocky/install/controller-install-option1-rdo.html
以下为第一种Networking Option 1: Provider networks
1)安装neutron软件包
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
2)快速配置/etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:neutron@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf nova auth_type password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password nova
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

查看生效的配置

egrep -v '^($|#)' /etc/neutron/neutron.conf

[root@openstack01 tools]# egrep -v '^($|#)' /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:neutron@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]

3)快速配置/etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True

查看生效的配置

egrep -v '^($|#)' /etc/neutron/plugins/ml2/ml2_conf.ini

[root@openstack01 tools]# egrep -v '^($|#)' /etc/neutron/plugins/ml2/ml2_conf.ini
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
enable_ipset = True

4)快速配置/etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:eno16777736
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

查看生效的配置

egrep -v '^($|#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@openstack01 tools]# egrep -v '^($|#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:eno16777736
[network_log]
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = False

以下参数在启动neutron-linuxbridge-agent.service的时候会自动设置为1

sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
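如果执行上面的sysctl提示参数不存在,通常是br_netfilter模块尚未加载,可以先手动加载再确认(仅为示例):

modprobe br_netfilter
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables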
5)快速配置/etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True

查看生效的配置

egrep -v '^($|#)' /etc/neutron/dhcp_agent.ini

[root@openstack01 tools]# egrep -v '^($|#)' /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[agent]
[ovs]

至此,方式1的配置文件修改完毕

6)快速配置/etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron

查看生效的配置

egrep -v '^($|#)' /etc/neutron/metadata_agent.ini

[root@openstack01 tools]# egrep -v '^($|#)' /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = neutron
[agent]
[cache]

metadata_proxy_shared_secret选项是元数据代理使用的共享密钥,需要设置一个合适的密码,这里设置为neutron,并且要与nova.conf中neutron段的metadata_proxy_shared_secret保持一致
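生产环境建议使用随机字符串作为共享密钥,并同时写入metadata_agent.ini与nova.conf两处,保证取值一致(仅为示例,本文后续仍以neutron为例):

METADATA_SECRET=$(openssl rand -hex 10)
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret $METADATA_SECRET
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret $METADATA_SECRET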

7)配置计算服务使用网络服务

快速配置/etc/nova/nova.conf,将neutron添加到计算节点中

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret neutron

查看生效的配置

egrep -v '^($|#)' /etc/nova/nova.conf

[root@openstack01 tools]# egrep -v '^($|#)' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip = 192.168.1.81
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:openstack@controller
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:nova@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:nova@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[placement_database]
connection = mysql+pymysql://placement:placement@controller/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

8)初始化安装网络插件

创建网络插件的配置链接:网络初始化脚本需要一个指向ML2插件配置文件的符号链接/etc/neutron/plugin.ini,用下面的命令创建即可

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
9)同步数据库
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

实例演示:

[root@openstack01 tools]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
正在对 neutron 运行 upgrade…
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> kilo
INFO [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225
INFO [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151
INFO [alembic.runtime.migration] Running upgrade 599c6a226151 -> 52c5312f6baf
INFO [alembic.runtime.migration] Running upgrade 52c5312f6baf -> 313373c0ffee
INFO [alembic.runtime.migration] Running upgrade 313373c0ffee -> 8675309a5c4f
INFO [alembic.runtime.migration] Running upgrade 8675309a5c4f -> 45f955889773
INFO [alembic.runtime.migration] Running upgrade 45f955889773 -> 26c371498592
INFO [alembic.runtime.migration] Running upgrade 26c371498592 -> 1c844d1677f7
INFO [alembic.runtime.migration] Running upgrade 1c844d1677f7 -> 1b4c6e320f79
INFO [alembic.runtime.migration] Running upgrade 1b4c6e320f79 -> 48153cb5f051
INFO [alembic.runtime.migration] Running upgrade 48153cb5f051 -> 9859ac9c136
INFO [alembic.runtime.migration] Running upgrade 9859ac9c136 -> 34af2b5c5a59
INFO [alembic.runtime.migration] Running upgrade 34af2b5c5a59 -> 59cb5b6cf4d
INFO [alembic.runtime.migration] Running upgrade 59cb5b6cf4d -> 13cfb89f881a
INFO [alembic.runtime.migration] Running upgrade 13cfb89f881a -> 32e5974ada25
INFO [alembic.runtime.migration] Running upgrade 32e5974ada25 -> ec7fcfbf72ee
INFO [alembic.runtime.migration] Running upgrade ec7fcfbf72ee -> dce3ec7a25c9
INFO [alembic.runtime.migration] Running upgrade dce3ec7a25c9 -> c3a73f615e4
INFO [alembic.runtime.migration] Running upgrade c3a73f615e4 -> 659bf3d90664
INFO [alembic.runtime.migration] Running upgrade 659bf3d90664 -> 1df244e556f5
INFO [alembic.runtime.migration] Running upgrade 1df244e556f5 -> 19f26505c74f
INFO [alembic.runtime.migration] Running upgrade 19f26505c74f -> 15be73214821
INFO [alembic.runtime.migration] Running upgrade 15be73214821 -> b4caf27aae4
INFO [alembic.runtime.migration] Running upgrade b4caf27aae4 -> 15e43b934f81
INFO [alembic.runtime.migration] Running upgrade 15e43b934f81 -> 31ed664953e6
INFO [alembic.runtime.migration] Running upgrade 31ed664953e6 -> 2f9e956e7532
INFO [alembic.runtime.migration] Running upgrade 2f9e956e7532 -> 3894bccad37f
INFO [alembic.runtime.migration] Running upgrade 3894bccad37f -> 0e66c5227a8a
INFO [alembic.runtime.migration] Running upgrade 0e66c5227a8a -> 45f8dd33480b
INFO [alembic.runtime.migration] Running upgrade 45f8dd33480b -> 5abc0278ca73
INFO [alembic.runtime.migration] Running upgrade 5abc0278ca73 -> d3435b514502
INFO [alembic.runtime.migration] Running upgrade d3435b514502 -> 30107ab6a3ee
INFO [alembic.runtime.migration] Running upgrade 30107ab6a3ee -> c415aab1c048
INFO [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4
INFO [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99
INFO [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada
INFO [alembic.runtime.migration] Running upgrade 4ffceebfada -> 5498d17be016
INFO [alembic.runtime.migration] Running upgrade 5498d17be016 -> 2a16083502f3
INFO [alembic.runtime.migration] Running upgrade 2a16083502f3 -> 2e5352a0ad4d
INFO [alembic.runtime.migration] Running upgrade 2e5352a0ad4d -> 11926bcfe72d
INFO [alembic.runtime.migration] Running upgrade 11926bcfe72d -> 4af11ca47297
INFO [alembic.runtime.migration] Running upgrade 4af11ca47297 -> 1b294093239c
INFO [alembic.runtime.migration] Running upgrade 1b294093239c -> 8a6d8bdae39
INFO [alembic.runtime.migration] Running upgrade 8a6d8bdae39 -> 2b4c2465d44b
INFO [alembic.runtime.migration] Running upgrade 2b4c2465d44b -> e3278ee65050
INFO [alembic.runtime.migration] Running upgrade e3278ee65050 -> c6c112992c9
INFO [alembic.runtime.migration] Running upgrade c6c112992c9 -> 5ffceebfada
INFO [alembic.runtime.migration] Running upgrade 5ffceebfada -> 4ffceebfcdc
INFO [alembic.runtime.migration] Running upgrade 4ffceebfcdc -> 7bbb25278f53
INFO [alembic.runtime.migration] Running upgrade 7bbb25278f53 -> 89ab9a816d70
INFO [alembic.runtime.migration] Running upgrade 89ab9a816d70 -> c879c5e1ee90
INFO [alembic.runtime.migration] Running upgrade c879c5e1ee90 -> 8fd3918ef6f4
INFO [alembic.runtime.migration] Running upgrade 8fd3918ef6f4 -> 4bcd4df1f426
INFO [alembic.runtime.migration] Running upgrade 4bcd4df1f426 -> b67e765a3524
INFO [alembic.runtime.migration] Running upgrade a963b38d82f4 -> 3d0e74aa7d37
INFO [alembic.runtime.migration] Running upgrade 3d0e74aa7d37 -> 030a959ceafa
INFO [alembic.runtime.migration] Running upgrade 030a959ceafa -> a5648cfeeadf
INFO [alembic.runtime.migration] Running upgrade a5648cfeeadf -> 0f5bef0f87d4
INFO [alembic.runtime.migration] Running upgrade 0f5bef0f87d4 -> 67daae611b6e
INFO [alembic.runtime.migration] Running upgrade 67daae611b6e -> 6b461a21bcfc
INFO [alembic.runtime.migration] Running upgrade 6b461a21bcfc -> 5cd92597d11d
INFO [alembic.runtime.migration] Running upgrade 5cd92597d11d -> 929c968efe70
INFO [alembic.runtime.migration] Running upgrade 929c968efe70 -> a9c43481023c
INFO [alembic.runtime.migration] Running upgrade a9c43481023c -> 804a3c76314c
INFO [alembic.runtime.migration] Running upgrade 804a3c76314c -> 2b42d90729da
INFO [alembic.runtime.migration] Running upgrade 2b42d90729da -> 62c781cb6192
INFO [alembic.runtime.migration] Running upgrade 62c781cb6192 -> c8c222d42aa9
INFO [alembic.runtime.migration] Running upgrade c8c222d42aa9 -> 349b6fd605a6
INFO [alembic.runtime.migration] Running upgrade 349b6fd605a6 -> 7d32f979895f
INFO [alembic.runtime.migration] Running upgrade 7d32f979895f -> 594422d373ee
INFO [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c
INFO [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding
INFO [alembic.runtime.migration] Running upgrade b67e765a3524 -> a84ccf28f06a
INFO [alembic.runtime.migration] Running upgrade a84ccf28f06a -> 7d9d8eeec6ad
INFO [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab
INFO [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0
INFO [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62
INFO [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353
INFO [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
确定
[root@openstack01 tools]#

ok

10)重启nova_api服务
systemctl restart openstack-nova-api.service
11)启动neutron服务并设置开机启动

需要启动4个服务

systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl list-unit-files |grep neutron* |grep enabled

如果使用方式2(Self-service networks)安装neutron,还需要执行以下命令(本教程暂略)

systemctl enable neutron-l3-agent.service

systemctl start neutron-l3-agent.service

实例演示:

[root@openstack01 tools]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
[root@openstack01 tools]# systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
● neutron-server.service - OpenStack Neutron Server
Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 21:37:59 CST; 5s ago
Main PID: 2231 (neutron-server)
CGroup: /system.slice/neutron-server.service
├─2231 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron…
├─2317 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron…
├─2318 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron…
├─2319 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron…
└─2320 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron…

10月 29 21:36:42 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Neutron Server…
10月 29 21:37:59 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Neutron Server.

● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 21:36:43 CST; 1min 21s ago
Process: 2232 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 2248 (neutron-linuxbr)
CGroup: /system.slice/neutron-linuxbridge-agent.service
├─2248 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neu…
├─2301 sudo neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-f…
├─2304 /usr/bin/python2 /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/ne…
└─2309 /usr/bin/python2 /bin/privsep-helper --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml…

10月 29 21:36:42 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
10月 29 21:36:43 openstack01.zuiyoujie.com neutron-enable-bridge-firewall.sh[2232]: net.bridge.bridge-nf-call-iptables = 1
10月 29 21:36:43 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
10月 29 21:37:31 openstack01.zuiyoujie.com sudo[2301]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-f…

● neutron-dhcp-agent.service - OpenStack Neutron DHCP Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 21:36:42 CST; 1min 22s ago
Main PID: 2233 (neutron-dhcp-ag)
CGroup: /system.slice/neutron-dhcp-agent.service
└─2233 /usr/bin/python2 /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dh…

10月 29 21:36:42 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Neutron DHCP Agent.
10月 29 21:36:42 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Neutron DHCP Agent…

● neutron-metadata-agent.service - OpenStack Neutron Metadata Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-metadata-agent.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 21:36:42 CST; 1min 22s ago
Main PID: 2234 (neutron-metadat)
CGroup: /system.slice/neutron-metadata-agent.service
└─2234 /usr/bin/python2 /usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutro…

10月 29 21:36:42 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Neutron Metadata Agent.
10月 29 21:36:42 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Neutron Metadata Agent…
Hint: Some lines were ellipsized, use -l to show in full.
[root@openstack01 tools]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
[root@openstack01 tools]# systemctl list-unit-files |grep neutron* |grep enabled
neutron-dhcp-agent.service enabled
neutron-linuxbridge-agent.service enabled
neutron-metadata-agent.service enabled
neutron-server.service enabled

At this point the neutron networking service on the controller node is installed. Next, install the networking components on the compute node so that it can join the openstack cluster.

6.4. Install the neutron networking components on the compute node

Install and configure compute node

https://docs.openstack.org/neutron/rocky/install/compute-install-rdo.html
1) Install the neutron packages
yum install openstack-neutron-linuxbridge ebtables ipset -y
2) Quick configuration of /etc/neutron/neutron.conf

openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

Check the effective configuration

egrep -v '^($|#)' /etc/neutron/neutron.conf

[root@openstack02 ~]# egrep -v '^($|#)' /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
[agent]
[cors]
[database]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]

3) Quick configuration of /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Note: the first option, physical_interface_mappings, must map the provider network to the compute node's own NIC name, here provider:ens33. A quick way to confirm the interface name is shown below.
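If you are unsure of the compute node's interface name, the following quick check can be used (a minimal sketch; ens33 is only the NIC name used in this lab and should be replaced with whatever the check prints for your provider interface):

ip -o link show | awk -F': ' '{print $2}'   # list the interface names on the compute node
ip addr show ens33                          # confirm this interface carries the node's 192.168.1.x address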

Check the effective configuration

egrep -v '^($|#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[root@openstack02 ~]# egrep -v '^($|#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens33
[network_log]
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = false

4) Configure the nova compute service to work with the neutron networking service

Quick configuration of /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron

Check the effective configuration

egrep -v '^($|#)' /etc/nova/nova.conf

[root@openstack02 ~]# egrep -v '^($|#)' /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
my_ip = 192.168.1.82
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
log_date_format=%Y-%m-%d %H:%M:%S
log_file=nova-compute.log
log_dir=/var/log/nova
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
[placement_database]
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = 192.168.1.82
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

5) Restart the nova-compute service on the compute node
systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service
6) Start the neutron networking component and enable it at boot

Only one service needs to be started here, the Linux bridge agent

systemctl restart neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service

systemctl enable neutron-linuxbridge-agent.service
systemctl list-unit-files |grep neutron* |grep enabled

Example:

[root@openstack02 ~]# systemctl restart neutron-linuxbridge-agent.service
[root@openstack02 ~]# systemctl status neutron-linuxbridge-agent.service
● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; disabled; vendor preset: disabled)
Active: active (running) since 一 2018-10-29 21:57:32 CST; 44ms ago
Process: 3076 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
Main PID: 3083 (neutron-linuxbr)
Tasks: 1
CGroup: /system.slice/neutron-linuxbridge-agent.service
└─3083 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neu…

10月 29 21:57:32 openstack02.zuiyoujie.com systemd[1]: Starting OpenStack Neutron Linux Bridge Agent…
10月 29 21:57:32 openstack02.zuiyoujie.com neutron-enable-bridge-firewall.sh[3076]: net.bridge.bridge-nf-call-iptables = 1
10月 29 21:57:32 openstack02.zuiyoujie.com neutron-enable-bridge-firewall.sh[3076]: net.bridge.bridge-nf-call-ip6tables = 1
10月 29 21:57:32 openstack02.zuiyoujie.com systemd[1]: Started OpenStack Neutron Linux Bridge Agent.
[root@openstack02 ~]# systemctl enable neutron-linuxbridge-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-linuxbridge-agent.service to /usr/lib/systemd/system/neutron-linuxbridge-agent.service.
[root@openstack02 ~]# systemctl list-unit-files |grep neutron* |grep enabled
neutron-linuxbridge-agent.service enabled

At this point the network configuration on the compute node is complete; switch back to the controller node to verify.

6.5. Verify on the controller node that the neutron service is working

Verify operation

https://docs.openstack.org/neutron/rocky/install/verify.html

Run the following commands on the controller node

1) Load the admin credentials
cd /server/tools
source keystone-admin-pass.sh
2) List the loaded network extensions
openstack extension list --network

Example:

[root@openstack01 tools]# openstack extension list --network
±----------------------------------------------------------------------------------------------------------------------------------------±-------------------------------±---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Alias | Description |
±----------------------------------------------------------------------------------------------------------------------------------------±-------------------------------±---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark and use a subnetpool as the default. |
| Network IP Availability | network-ip-availability | Provides IP availability data for each network and subnet. |
| Network Availability Zone | network_availability_zone | Availability zone support for network. |
| Network MTU (writable) | net-mtu-writable | Provides a writable MTU attribute for a network resource. |
| Port Binding | binding | Expose port bindings of a virtual port to external application |
| agent | agent | The agent management extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of subnets from a subnet pool |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among dhcp agents |
| Neutron external network | external-net | Adds external network attribute to network resource. |
| Neutron Service Flavors | flavors | Flavor specification for Neutron advanced services. |
| Network MTU | net-mtu | Provides MTU attribute for a network resource. |
| Availability Zone | availability_zone | The availability zone extension. |
| Quota management support | quotas | Expose functions for quotas management per tenant |
| Tag support for resources with standard attribute: subnet, trunk, router, network, policy, subnetpool, port, security_group, floatingip | standard-attr-tag | Enables to set tag on resources with standard attribute. |
| Availability Zone Filter Extension | availability_zone_filter | Add filter parameters to AvailabilityZone resource |
| If-Match constraints based on revision_number | revision-if-match | Extension indicating that If-Match based on revision_number is supported. |
| Filter parameters validation | filter-validation | Provides validation on filter parameters. |
| Multi Provider Network | multi-provider | Expose mapping of virtual networks to multiple physical networks |
| Quota details management support | quota_details | Expose functions for quotas usage statistics per project |
| Address scope | address-scope | Address scopes extension. |
| Empty String Filtering Extension | empty-string-filtering | Allow filtering by attributes with empty string value |
| Subnet service types | subnet-service-types | Provides ability to set the subnet service_types field |
| Neutron Port MAC address regenerate | port-mac-address-regenerate | Network port MAC address regenerate |
| Resource timestamps | standard-attr-timestamp | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes. |
| Provider Network | provider | Expose mapping of virtual networks to physical networks |
| Neutron Service Type Management | service-type | API for retrieving service providers for Neutron advanced services |
| Neutron Extra DHCP options | extra_dhcp_opt | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Port filtering on security groups | port-security-groups-filtering | Provides security groups filtering when listing ports |
| Resource revision numbers | standard-attr-revisions | This extension will display the revision number of neutron resources. |
| Pagination support | pagination | Extension that indicates that pagination is enabled. |
| Sorting support | sorting | Extension that indicates that sorting is enabled. |
| security-group | security-group | The security groups extension. |
| RBAC Policies | rbac-policies | Allows creation and modification of policies that control tenant access to resources. |
| standard-attr-description | standard-attr-description | Extension to add descriptions to standard attributes |
| IP address substring filtering | ip-substring-filtering | Provides IP address substring filtering when listing ports |
| Port Security | port-security | Provides port security |
| Allowed Address Pairs | allowed-address-pairs | Provides allowed address pairs |
| project_id field enabled | project-id | Extension that indicates that project_id field is enabled. |
| Port Bindings Extended | binding-extended | Expose port bindings of a virtual port to external application |
±----------------------------------------------------------------------------------------------------------------------------------------±-------------------------------±---------------------------------------------------------------------------------------------------------------------------------------------------------+

Alternatively, a condensed listing:

[root@openstack01 tools]# neutron ext-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
±-------------------------------±----------------------------------------------------------------------------------------------------------------------------------------+
| alias | name |
±-------------------------------±----------------------------------------------------------------------------------------------------------------------------------------+
| default-subnetpools | Default Subnetpools |
| network-ip-availability | Network IP Availability |
| network_availability_zone | Network Availability Zone |
| net-mtu-writable | Network MTU (writable) |
| binding | Port Binding |
| agent | agent |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| external-net | Neutron external network |
| flavors | Neutron Service Flavors |
| net-mtu | Network MTU |
| availability_zone | Availability Zone |
| quotas | Quota management support |
| standard-attr-tag | Tag support for resources with standard attribute: subnet, trunk, router, network, policy, subnetpool, port, security_group, floatingip |
| availability_zone_filter | Availability Zone Filter Extension |
| revision-if-match | If-Match constraints based on revision_number |
| filter-validation | Filter parameters validation |
| multi-provider | Multi Provider Network |
| quota_details | Quota details management support |
| address-scope | Address scope |
| empty-string-filtering | Empty String Filtering Extension |
| subnet-service-types | Subnet service types |
| port-mac-address-regenerate | Neutron Port MAC address regenerate |
| standard-attr-timestamp | Resource timestamps |
| provider | Provider Network |
| service-type | Neutron Service Type Management |
| extra_dhcp_opt | Neutron Extra DHCP options |
| port-security-groups-filtering | Port filtering on security groups |
| standard-attr-revisions | Resource revision numbers |
| pagination | Pagination support |
| sorting | Sorting support |
| security-group | security-group |
| rbac-policies | RBAC Policies |
| standard-attr-description | standard-attr-description |
| ip-substring-filtering | IP address substring filtering |
| port-security | Port Security |
| allowed-address-pairs | Allowed Address Pairs |
| project-id | project_id field enabled |
| binding-extended | Port Bindings Extended |
±-------------------------------±----------------------------------------------------------------------------------------------------------------------------------------+

3) List the network agents
openstack network agent list

Example:

[root@openstack01 tools]# openstack network agent list
+--------------------------------------+--------------------+---------------------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+---------------------------+-------------------+-------+-------+---------------------------+
| 53c6db96-25a5-4f38-aa3c-d5abdd3ad66a | DHCP agent | openstack01.zuiyoujie.com | nova | :-) | UP | neutron-dhcp-agent |
| 5c9509c5-71dd-42a6-b682-0ba1b9d24c12 | Linux bridge agent | openstack01.zuiyoujie.com | None | :-) | UP | neutron-linuxbridge-agent |
| bdd41869-cf75-447b-8857-f3e133f08883 | Linux bridge agent | openstack02.zuiyoujie.com | None | :-) | UP | neutron-linuxbridge-agent |
| e9935776-ca0b-4422-a5bc-350e285a0a24 | Metadata agent | openstack01.zuiyoujie.com | None | :-) | UP | neutron-metadata-agent |
+--------------------------------------+--------------------+---------------------------+-------------------+-------+-------+---------------------------+

Normally the controller node shows three agents and the compute node shows one. If not, re-check the compute node configuration: NIC name, IP address, ports, passwords, and so on. A quick troubleshooting sketch follows.
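If the compute node's Linux bridge agent does not show up in the list, a quick troubleshooting pass on the compute node can look like this (a sketch assuming the default RDO log location under /var/log/neutron/):

systemctl status neutron-linuxbridge-agent.service
tail -n 50 /var/log/neutron/linuxbridge-agent.log     # look for RabbitMQ or keystone authentication errors
systemctl restart neutron-linuxbridge-agent.service   # restart after correcting the configuration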

Installing OpenStack (Rocky) on CentOS 7 - 07. Installing the horizon service (dashboard on the controller node)
Contents
• 7.0. horizon (dashboard) overview
o System requirements
• 7.1. Install the dashboard web console
o 1) Install the dashboard package
o 2) Edit /etc/openstack-dashboard/local_settings
o 3) Edit /etc/httpd/conf.d/openstack-dashboard.conf
o 4) Restart the web server and session storage service
o 5) Check that the dashboard works
o 6) Other optional dashboard configuration


The previous part covered installing and configuring the neutron networking service. This part covers horizon (dashboard), the openstack web management interface, which makes it convenient to operate the cluster from a browser.
---------------------- divider ------------------------
7.0. horizon (dashboard) overview

Mitaka installation guide (Chinese)

https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/neutron-controller-install.html

Rocky install guide landing page

https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-rocky

Rocky horizon (dashboard) installation documentation

https://docs.openstack.org/horizon/rocky/install/

System requirements:
1) Python 2.7, or 3.5 and later
2) Django 1.11, or 2.0 and later (Django 1.8 to 1.10 are no longer supported since the Rocky release; a quick version check is shown after this list)
3) A working keystone identity service
4) The following services are optional:
cinder: Block Storage
glance: Image Management
neutron: Networking
nova: Compute
swift: Object Storage
Horizon also supports many other OpenStack services via plugins. For more information, see the Plugin Registry.
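Once openstack-dashboard is installed in the next step, the Python and Django versions can be compared against these requirements with a quick check like the one below (a sketch; on the CentOS 7 RDO packages Django is pulled in as a dependency of openstack-dashboard):

python --version
python -c "import django; print(django.get_version())"   # should report 1.11 or newer for Rocky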
7.1. Install the dashboard web console
1) Install the dashboard package
yum install openstack-dashboard -y
2) Edit the configuration file /etc/openstack-dashboard/local_settings

Make sure it contains the following settings

vim /etc/openstack-dashboard/local_settings

ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
}
TIME_ZONE = "Asia/Shanghai"

Notes on these settings:

OPENSTACK_HOST points the dashboard at the controller node (192.168.1.81)
ALLOWED_HOSTS controls which hosts may access the dashboard
CACHES and SESSION_ENGINE configure memcached session storage
Enable the Identity API version 3
Enable support for domains
Configure the API versions
Set default as the default domain for users created through the dashboard
Set user as the default role for users created through the dashboard
If you chose networking option 1 (provider networks), disable support for layer-3 networking services
Optionally set the time zone; do not use CST, otherwise the httpd service will not start

3) Edit /etc/httpd/conf.d/openstack-dashboard.conf

Add the following line

vim /etc/httpd/conf.d/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}

4) Restart the web server and the session storage service
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
5) Check that the dashboard works

Open the following URL in a browser; the user name and password are both admin, and the domain is default

http://controller:80/dashboard

The browser shows the horizon login page; after logging in, an overview page is displayed (screenshots not included here).

A fresh installation is mostly empty; projects and resources are added by hand later.
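The dashboard can also be checked from the command line on the controller; a minimal sketch using curl (the status code may be a 302 redirect to the login page rather than 200):

curl -s -o /dev/null -w "%{http_code}\n" http://controller/dashboard/   # 200 or 302 means horizon is answering requests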

6) Other optional dashboard configuration
https://docs.openstack.org/horizon/rocky/install/next-steps.html

Installing OpenStack (Rocky) on CentOS 7 - 08. Launching a virtual machine instance
Contents
• 8.0. The two kinds of neutron virtual networks
o 1) Provider network
o 2) Self-service network
• 8.1. Create a provider network
o 1) Create the network on the controller node
o 2) Check the network configuration
o 3) Create provider subnets
• 8.3. Create a self-service (private) network


With the essential openstack components (keystone, nova, glance, neutron) installed, the openstack CLI can now be used to create a cloud virtual machine.
------------------- divider --------------------
8.0. The two kinds of neutron virtual networks
In a physical environment, after buying routers, switches, and other network equipment you still have to connect cables and configure the LAN before servers can reach the network.
Likewise, in openstack, installing neutron only amounts to having bought the network equipment; a virtual network still has to be created before virtual machines can run in it.
The neutron installation described openstack's virtual networks only briefly; there are two modes:
1) Provider network

Network topology reference: https://docs.openstack.org/install-guide/launch-instance-networks-provider.html

Put simply, this is a network bridged onto the existing physical network (topology diagram not included here).

In this mode the cluster nodes are connected over the physical network; inside each node an L2 device (the provider bridge/switch) connects to the physical network, and the network can include a DHCP server that hands out IP addresses to instances.
Instances (virtual machines) in the cluster send traffic through tap ports mapped onto the provider network and the bridged NIC, communicating both internally and externally, much like a KVM virtual machine using bridged networking (diagram not included here).

ok

2) Self-service network

Network topology reference: https://docs.openstack.org/install-guide/launch-instance-networks-selfservice.html

Similar to an internal private network on Alibaba Cloud: it lets users build their own internal network that is isolated from the outside (diagram not included here).

It is an extension on top of the provider network: a self-service bridge uses VXLAN to create an independent network, which can also reach the physical network through VXLAN tunnels for data transfer (topology diagram not included here).

ok

8.1. Create a provider network
1) Create the network on the controller node

Load the admin credentials to gain access to admin-only CLI commands

cd /server/tools/
source keystone-admin-pass.sh
openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
openstack network list

Example:

[root@openstack01 tools]# openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider
±--------------------------±-------------------------------------+
| Field | Value |
±--------------------------±-------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-11-06T06:34:01Z |
| description | |
| dns_domain | None |
| id | 25346d04-0f1f-4277-b896-ba3f01425d86 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| mtu | 1500 |
| name | provider |
| port_security_enabled | True |
| project_id | 3706708374804e2eb4ed056f55d84666 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 0 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2018-11-06T06:34:01Z |
±--------------------------±-------------------------------------+
[root@openstack01 tools]# openstack network list
+--------------------------------------+----------+---------+
| ID | Name | Subnets |
+--------------------------------------+----------+---------+
| 25346d04-0f1f-4277-b896-ba3f01425d86 | provider | |
+--------------------------------------+----------+---------+

For reference, the legacy commands (which create slightly fewer attributes in this release):

neutron net-create --shared --provider:physical_network [physical provider network name] --provider:network_type flat [name of the virtual network to create]
neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
neutron net-create --shared --provider:physical_network provider --provider:network_type flat provider
2) Check the network configuration

Confirm the following option in ml2_conf.ini

The --provider-network-type flat and the network name provider in the command above correspond to this setting

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2_type_flat]
flat_networks = provider

Confirm the following option in linuxbridge_agent.ini

The --provider-physical-network provider in the command above corresponds to this mapping; note that the interface must be the controller node's own NIC name

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:eno16777736

3) Create provider subnets
openstack subnet create --network provider --no-dhcp --allocation-pool start=192.168.1.210,end=192.168.1.220 --dns-nameserver 4.4.4.4 --gateway 192.168.1.1 --subnet-range 192.168.1.0/24 provider-subnet01
openstack subnet create --network provider --dhcp --subnet-range 192.168.2.0/24 provider-subnet02
openstack subnet list

Example:

[root@openstack01 tools]# openstack subnet create --network provider --no-dhcp --allocation-pool start=192.168.1.210,end=192.168.1.220 --dns-nameserver 4.4.4.4 --gateway 192.168.1.1 --subnet-range 192.168.1.0/24 provider-subnet01
±------------------±-------------------------------------+
| Field | Value |
±------------------±-------------------------------------+
| allocation_pools | 192.168.1.210-192.168.1.220 |
| cidr | 192.168.1.0/24 |
| created_at | 2018-11-12T12:48:08Z |
| description | |
| dns_nameservers | 4.4.4.4 |
| enable_dhcp | False |
| gateway_ip | 192.168.1.1 |
| host_routes | |
| id | 2aaf50aa-ab80-4ed5-99c8-58d4d4d31ff3 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider-subnet01 |
| network_id | 25346d04-0f1f-4277-b896-ba3f01425d86 |
| project_id | 3706708374804e2eb4ed056f55d84666 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2018-11-12T12:48:08Z |
±------------------±-------------------------------------+
[root@openstack01 tools]# openstack subnet create --network provider --dhcp --subnet-range 192.168.2.0/24 provider-subnet02
±------------------±-------------------------------------+
| Field | Value |
±------------------±-------------------------------------+
| allocation_pools | 192.168.2.2-192.168.2.254 |
| cidr | 192.168.2.0/24 |
| created_at | 2018-11-12T12:48:13Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.2.1 |
| host_routes | |
| id | 0d21b823-ae0c-4c3e-87e6-22e3b2d794c4 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider-subnet02 |
| network_id | 25346d04-0f1f-4277-b896-ba3f01425d86 |
| project_id | 3706708374804e2eb4ed056f55d84666 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2018-11-12T12:48:13Z |
±------------------±-------------------------------------+
[root@openstack01 tools]# openstack subnet list
+--------------------------------------+-------------------+--------------------------------------+----------------+
| ID | Name | Network | Subnet |
+--------------------------------------+-------------------+--------------------------------------+----------------+
| 0d21b823-ae0c-4c3e-87e6-22e3b2d794c4 | provider-subnet02 | 25346d04-0f1f-4277-b896-ba3f01425d86 | 192.168.2.0/24 |
| 2aaf50aa-ab80-4ed5-99c8-58d4d4d31ff3 | provider-subnet01 | 25346d04-0f1f-4277-b896-ba3f01425d86 | 192.168.1.0/24 |
+--------------------------------------+-------------------+--------------------------------------+----------------+

At this point the provider network is created and virtual machines can be launched on it.
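Before moving on, the new network and its subnets can be reviewed in one place; a quick check such as:

openstack network show provider            # flat, shared, external provider network
openstack subnet list --network provider   # should list provider-subnet01 and provider-subnet02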

8.3. Create a self-service (private) network
#Create the self-service network
https://docs.openstack.org/install-guide/launch-instance-networks-selfservice.html
1) Create the self-service network

Installing OpenStack (Rocky) on CentOS 7 - 09. Installing the Cinder block storage service (controller node)
Contents
• 9.0. Cinder overview
• 9.1. Install the cinder storage service on the controller node
o 1) Create the cinder database
o 2) Register the cinder service in keystone (create the service credentials)
o 3) Install the cinder packages
o 4) Quick cinder configuration
o 5) Sync the cinder database
o 6) Edit the nova configuration
o 7) Restart the nova-api service
o 8) Start the cinder services
• 9.2. Install the cinder storage service on the storage node
o 1) Install the LVM packages
o 2) Start the LVM metadata service and enable it at boot
o 3) Create the LVM volume group
o 4) Configure the LVM filter to avoid system problems
o 5) Install and configure the cinder components on the storage node
o 6) Quick cinder configuration on the storage node
o 7) Start the cinder services on the storage node and enable them at boot
• 9.3. Verify on the controller node
o 1) Load the admin credentials
o 2) List the volume services
• 9.4. Recommendations for using cinder cloud disks


This part covers the openstack Cinder block storage service; cinder provides cloud disks (volumes), similar to Alibaba Cloud disks.
----------------------- divider -----------------------------

OpenStack Mitaka cinder Block Storage documentation (Chinese)

https://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/cinder.html

Official Cinder installation documentation for the Rocky release

https://docs.openstack.org/cinder/rocky/install/

9.0. Cinder overview
The OpenStack Block Storage service (cinder) adds persistent storage to virtual machines. It provides the infrastructure for managing volumes and interacts with the OpenStack Compute service to supply volumes to instances. The service also enables management of volume snapshots and volume types.
The Block Storage service typically consists of the following components:
1) cinder-api
Accepts API requests and routes them to cinder-volume for execution.
2) cinder-volume
Interacts directly with the Block Storage service and with processes such as cinder-scheduler, or with them through a message queue. It responds to read and write requests sent to the Block Storage service to maintain state, and can work with a variety of storage providers through a driver architecture.
3) cinder-scheduler daemon
Selects the optimal storage node on which to create a volume; similar to the nova-scheduler component.
4) cinder-backup daemon
Backs up volumes of any kind to a backup storage provider. Like cinder-volume, it can work with a variety of storage providers through a driver architecture.
5) Message queue
Routes information between the Block Storage processes.
9.1. Install the cinder storage service on the controller node

Install and configure controller node

https://docs.openstack.org/cinder/rocky/install/cinder-controller-install-rdo.html
1) Create the cinder database

Create the database and grant access to the cinder user

mysql -u root -p123456

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
flush privileges;
show databases;
select user,host from mysql.user;
exit
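Optionally, confirm that the cinder account can actually reach the new database before continuing (a quick sketch; the password cinder follows this guide's convention):

mysql -h controller -ucinder -pcinder -e "show databases;"   # the cinder database should appear in the output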

2) Register the cinder service in keystone (create the service credentials)

Create the cinder user in keystone

cd /server/tools
source keystone-admin-pass.sh
openstack user create --domain default --password=cinder cinder
openstack user list

Add the admin role to the cinder user in the service project (the following command produces no output)

openstack role add --project service --user cinder admin

Create the cinder service entities

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack service list

Create the cinder service API endpoints

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%(project_id)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%(project_id)s

openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%(project_id)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%(project_id)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%(project_id)s
openstack endpoint list

Example:

[root@openstack01 tools]# openstack user create --domain default --password=cinder cinder
±--------------------±---------------------------------+
| Field | Value |
±--------------------±---------------------------------+
| domain_id | default |
| enabled | True |
| id | a1a276d12c4e442ebc9250e4d4148166 |
| name | cinder |
| options | {} |
| password_expires_at | None |
±--------------------±---------------------------------+
[root@openstack01 tools]# openstack user list
±---------------------------------±----------+
| ID | Name |
±---------------------------------±----------+
| 26f88ba142d04735936d09caa7c76284 | placement |
| 82a27e65ca644a5eadcd54ff44e5e05b | glance |
| a1a276d12c4e442ebc9250e4d4148166 | cinder |
| cbb2b3830a8f44bc837230bca27ae563 | myuser |
| cc55913a3da44a38939cdc7a2ec764cc | nova |
| dd35b7396aa94342a01c807aaa707d21 | neutron |
| e5dbfc8b394c41679fd5ce229cdd6ed3 | admin |
±---------------------------------±----------+
[root@openstack01 tools]# openstack role add --project service --user cinder admin
[root@openstack01 tools]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
±------------±---------------------------------+
| Field | Value |
±------------±---------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | 5342850f7fd04f999ab6c6f787baa610 |
| name | cinderv2 |
| type | volumev2 |
±------------±---------------------------------+
[root@openstack01 tools]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
±------------±---------------------------------+
| Field | Value |
±------------±---------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | cba2b834789f49f5a9fdac76c09c5fae |
| name | cinderv3 |
| type | volumev3 |
±------------±---------------------------------+
[root@openstack01 tools]# openstack service list
±---------------------------------±----------±----------+
| ID | Name | Type |
±---------------------------------±----------±----------+
| 5342850f7fd04f999ab6c6f787baa610 | cinderv2 | volumev2 |
| 63c882889b204d81a9867f9b7c0ba7aa | keystone | identity |
| 6c31f22e259b460fa0168ac206265c30 | glance | image |
| 854ca66666c64e2fbeff1e9c5cc1c4df | nova | compute |
| 90b5d791df5e4634848c00ba35390865 | neutron | network |
| a79d818312b34c4c8879d7dbbd41a78c | placement | placement |
| cba2b834789f49f5a9fdac76c09c5fae | cinderv3 | volumev3 |
±---------------------------------±----------±----------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%(project_id)s
±-------------±-----------------------------------------+
| Field | Value |
±-------------±-----------------------------------------+
| enabled | True |
| id | 1412aab234bf4793bbb55bf938dfabe9 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5342850f7fd04f999ab6c6f787baa610 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
±-------------±-----------------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%(project_id)s
±-------------±-----------------------------------------+
| Field | Value |
±-------------±-----------------------------------------+
| enabled | True |
| id | 5421883053d84778b222ed24b424ad71 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5342850f7fd04f999ab6c6f787baa610 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
±-------------±-----------------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%(project_id)s
±-------------±-----------------------------------------+
| Field | Value |
±-------------±-----------------------------------------+
| enabled | True |
| id | 4947b9f1a61f4e5c858e1a2d6dd426eb |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 5342850f7fd04f999ab6c6f787baa610 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
±-------------±-----------------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%(project_id)s
±-------------±-----------------------------------------+
| Field | Value |
±-------------±-----------------------------------------+
| enabled | True |
| id | 594078e79fd44a8383a9dba42931ff06 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | cba2b834789f49f5a9fdac76c09c5fae |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
±-------------±-----------------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%(project_id)s
±-------------±-----------------------------------------+
| Field | Value |
±-------------±-----------------------------------------+
| enabled | True |
| id | 67735ee3f61d48aea3dc3338d67a1ca8 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | cba2b834789f49f5a9fdac76c09c5fae |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
±-------------±-----------------------------------------+
[root@openstack01 tools]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%(project_id)s
±-------------±-----------------------------------------+
| Field | Value |
±-------------±-----------------------------------------+
| enabled | True |
| id | fed47d708ea2407bb2a986a4796719b2 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | cba2b834789f49f5a9fdac76c09c5fae |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
±-------------±-----------------------------------------+
[root@openstack01 tools]# openstack endpoint list
±---------------------------------±----------±-------------±-------------±--------±----------±-----------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
±---------------------------------±----------±-------------±-------------±--------±----------±-----------------------------------------+
| 022711a6476648bda1446ecb7668f315 | RegionOne | placement | placement | True | public | http://controller:8778 |
| 1291aa2f71104ce69f9b05905fbc2c8a | RegionOne | placement | placement | True | admin | http://controller:8778 |
| 1412aab234bf4793bbb55bf938dfabe9 | RegionOne | cinderv2 | volumev2 | True | public | http://controller:8776/v2/%(project_id)s |
| 1cba9e89dc91422390a5b987dbeffdb6 | RegionOne | neutron | network | True | internal | http://controller:9696 |
| 2bcda9f77cdb4c06be6f35a3c3312e3d | RegionOne | neutron | network | True | admin | http://controller:9696 |
| 3f293d128470468683d5f82a66301232 | RegionOne | placement | placement | True | internal | http://controller:8778 |
| 43960ef2a79a45d49bfd22a2dbf4c2ce | RegionOne | nova | compute | True | internal | http://controller:8774/v2.1 |
| 4947b9f1a61f4e5c858e1a2d6dd426eb | RegionOne | cinderv2 | volumev2 | True | admin | http://controller:8776/v2/%(project_id)s |
| 5421883053d84778b222ed24b424ad71 | RegionOne | cinderv2 | volumev2 | True | internal | http://controller:8776/v2/%(project_id)s |
| 594078e79fd44a8383a9dba42931ff06 | RegionOne | cinderv3 | volumev3 | True | public | http://controller:8776/v3/%(project_id)s |
| 67735ee3f61d48aea3dc3338d67a1ca8 | RegionOne | cinderv3 | volumev3 | True | internal | http://controller:8776/v3/%(project_id)s |
| 7129fffdb2614227aca641b10635efdf | RegionOne | nova | compute | True | admin | http://controller:8774/v2.1 |
| 7226f8f9c7164214b815821b77ae3ce6 | RegionOne | glance | image | True | admin | http://controller:9292 |
| 756084d018c948039d2ae55b13fc7d4a | RegionOne | glance | image | True | internal | http://controller:9292 |
| 7f0461c745b340ef83372059782d22ee | RegionOne | nova | compute | True | public | http://controller:8774/v2.1 |
| b8dabe6c548e435eb2b1f7efe3b23236 | RegionOne | keystone | identity | True | admin | http://controller:5000/v3/ |
| eb72eb6ea51842feb67ba5849beea48c | RegionOne | keystone | identity | True | internal | http://controller:5000/v3/ |
| ed17939d7623456bb203bb7197fc16c4 | RegionOne | neutron | network | True | public | http://controller:9696 |
| f13c44af4e8d45d5b0229ea870f2c24f | RegionOne | glance | image | True | public | http://controller:9292 |
| f172f6159ad34fbd8e10e0d42828d8cd | RegionOne | keystone | identity | True | public | http://controller:5000/v3/ |
| fed47d708ea2407bb2a986a4796719b2 | RegionOne | cinderv3 | volumev3 | True | admin | http://controller:8776/v3/%(project_id)s |
±---------------------------------±----------±-------------±-------------±--------±----------±-----------------------------------------+

ok

3) Install the cinder packages
yum install openstack-cinder -y
4) Quick cinder configuration

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.1.81
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/nova/tmp

Check the effective cinder configuration

egrep -v "^#|^$" /etc/cinder/cinder.conf
grep '^[a-z]' /etc/cinder/cinder.conf

Example:


[root@openstack01 tools]# egrep -v "^#|^$" /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.1.81
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]

[root@openstack01 tools]# grep '^[a-z]' /etc/cinder/cinder.conf
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.1.81
connection = mysql+pymysql://cinder:cinder@controller/cinder
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
lock_path = /var/lib/nova/tmp

ok

5) Sync the cinder database

This creates 35 tables (a quick count check is shown after the example below)

su -s /bin/sh -c "cinder-manage db sync" cinder

Verify the database

mysql -h192.168.1.81 -ucinder -pcinder -e "use cinder;show tables;"

Example:

[root@openstack01 tools]# mysql -h192.168.1.81 -ucinder -pcinder -e "use cinder;show tables;"
±---------------------------+
| Tables_in_cinder |
±---------------------------+
| attachment_specs |
| backup_metadata |
| backups |
| cgsnapshots |
| clusters |
| consistencygroups |
| driver_initiator_data |
| encryption |
| group_snapshots |
| group_type_projects |
| group_type_specs |
| group_types |
| group_volume_type_mapping |
| groups |
| image_volume_cache_entries |
| messages |
| migrate_version |
| quality_of_service_specs |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| services |
| snapshot_metadata |
| snapshots |
| transfers |
| volume_admin_metadata |
| volume_attachment |
| volume_glance_metadata |
| volume_metadata |
| volume_type_extra_specs |
| volume_type_projects |
| volume_types |
| volumes |
| workers |
±---------------------------+

ok
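To compare the table count against the 35 tables mentioned above, a one-liner such as the following can be used (a sketch, not part of the original procedure):

mysql -h192.168.1.81 -ucinder -pcinder -e "use cinder;show tables;" | grep -v Tables_in_cinder | wc -l   # expect 35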

6) Edit the nova configuration

Configure nova to use the cinder service

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

Check the effective nova configuration

grep '^[a-z]' /etc/nova/nova.conf |grep os_region_name
7) Restart the nova-api service
systemctl restart openstack-nova-api.service
systemctl status openstack-nova-api.service
8) Start the cinder services

Two services need to be started

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl list-unit-files |grep openstack-cinder |grep enabled

Example:

[root@openstack01 tools]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
[root@openstack01 tools]# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service
● openstack-cinder-api.service - OpenStack Cinder API Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-api.service; disabled; vendor preset: disabled)
Active: active (running) since 二 2018-10-30 16:01:27 CST; 600ms ago
Main PID: 19104 (cinder-api)
CGroup: /system.slice/openstack-cinder-api.service
└─19104 /usr/bin/python2 /usr/bin/cinder-api --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinde…

10月 30 16:01:27 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Cinder API Server.
10月 30 16:01:27 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Cinder API Server…

● openstack-cinder-scheduler.service - OpenStack Cinder Scheduler Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-scheduler.service; disabled; vendor preset: disabled)
Active: active (running) since 二 2018-10-30 16:01:27 CST; 700ms ago
Main PID: 19105 (cinder-schedule)
CGroup: /system.slice/openstack-cinder-scheduler.service
└─19105 /usr/bin/python2 /usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log…

10月 30 16:01:27 openstack01.zuiyoujie.com systemd[1]: Started OpenStack Cinder Scheduler Server.
10月 30 16:01:27 openstack01.zuiyoujie.com systemd[1]: Starting OpenStack Cinder Scheduler Server…
[root@openstack01 tools]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-api.service to /usr/lib/systemd/system/openstack-cinder-api.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-scheduler.service to /usr/lib/systemd/system/openstack-cinder-scheduler.service.
[root@openstack01 tools]# systemctl list-unit-files |grep openstack-cinder |grep enabled
openstack-cinder-api.service enabled
openstack-cinder-scheduler.service enabled

At this point the cinder service on the controller is installed; a volumes entry now appears under the project menu in the dashboard.

Next, set up the block storage node (storage node).

9.2. Install the cinder storage service on the storage node

The storage node is best deployed on a separate server (ideally a physical machine); for testing it can also be placed on the controller or compute node.

In this guide the storage node serves volumes from LVM logical volumes, so an empty disk is needed to create the LVM volume group.

Here a 100 GB disk is added to the VMware virtual machine.

1) Install the LVM packages
yum install lvm2 device-mapper-persistent-data -y
2) Start the LVM metadata service and enable it at boot
systemctl start lvm2-lvmetad.service
systemctl status lvm2-lvmetad.service

systemctl enable lvm2-lvmetad.service
systemctl list-unit-files |grep lvm2-lvmetad |grep enabled
3) Create the LVM volume group

Check the disks

fdisk -l

Create the LVM physical volume /dev/sdb

pvcreate /dev/sdb

Create the LVM volume group cinder-volumes; the Block Storage service will create logical volumes in this group

vgcreate cinder-volumes /dev/sdb

Example:

[root@openstack02 ~]# fdisk -l

磁盘 /dev/sda:536.9 GB, 536870912000 字节,1048576000 个扇区
Units = 扇区 of 1 * 512 = 512 bytes
扇区大小(逻辑/物理):512 字节 / 512 字节
I/O 大小(最小/最佳):512 字节 / 512 字节
磁盘标签类型:dos
磁盘标识符:0x0003970d

设备 Boot Start End Blocks Id System
/dev/sda1 * 2048 1026047 512000 83 Linux
/dev/sda2 1026048 1044381695 521677824 83 Linux
/dev/sda3 1044381696 1048575999 2097152 82 Linux swap / Solaris

磁盘 /dev/sdb:107.4 GB, 107374182400 字节,209715200 个扇区
Units = 扇区 of 1 * 512 = 512 bytes
扇区大小(逻辑/物理):512 字节 / 512 字节
I/O 大小(最小/最佳):512 字节 / 512 字节

[root@openstack02 ~]# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created.
[root@openstack02 ~]# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

ok
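The new volume group can be double-checked before moving on (a quick sketch):

pvs /dev/sdb        # the physical volume should belong to VG cinder-volumes
vgs cinder-volumes  # VSize/VFree should both show roughly 100g at this point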

4) Configure the LVM filter to avoid system problems

Only openstack instances should access the block storage volume group, but the underlying operating system also manages these devices and may try to associate the logical volumes with itself.

By default the LVM volume scanning tool scans the whole /dev directory for block devices that contain LVM volumes. If other projects use LVM on devices such as sda or sdc, the scanner may detect and cache those volumes as well, which can prevent the underlying OS or other services from using their own LVM volume groups and cause assorted problems. LVM therefore has to be configured to scan only the device that holds the cinder-volumes group, /dev/sdb. My disks here use plain formatted partitions, so the problem does not arise in this setup, but the configuration is shown below.

vim /etc/lvm/lvm.conf

devices {
    filter = [ "a/sdb/", "r/.*/"]
}

Filter rules:

Each element in the filter array begins with a (accept) or r (reject), followed by a regular expression for the device name.

The filter array must end with "r/.*/" to reject all remaining devices.

The filter can be tested with the command vgs -vvvv.

Note:

If the storage node's operating system disk /dev/sda also uses LVM, that device must be added to the filter as well, for example:

filter = [ "a/sda/", "a/sdb/", "r/.*/"]

If the compute nodes' operating system disk /dev/sda uses LVM, edit /etc/lvm/lvm.conf on those nodes as well and add the disk to the filter, for example:

filter = [ "a/sda/", "r/.*/"]
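After editing /etc/lvm/lvm.conf, the effect of the filter can be checked quickly (a sketch; on this setup only /dev/sdb, plus any system disk you whitelisted, should remain visible to LVM):

pvscan --cache                     # rescan block devices with the new filter
pvs                                # only the accepted devices should be listed
vgs -vvvv 2>&1 | grep -i filter    # verbose output shows which devices were accepted or rejected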
5) Install and configure the cinder components on the storage node
yum install openstack-cinder targetcli python-keystone -y
6) Quick cinder configuration on the storage node

openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:cinder@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:openstack@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 192.168.1.82
openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_protocol iscsi
openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_api_servers http://controller:9292
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

If the storage node has two NICs, set my_ip to the storage node's management IP; otherwise use the node's own IP.

Check the effective cinder configuration

egrep -v "^#|^$" /etc/cinder/cinder.conf
grep '^[a-z]' /etc/cinder/cinder.conf

Example:

[root@openstack02 ~]# egrep -v "^#|^$" /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.1.82
enabled_backends = lvm
glance_api_servers = http://controller:9292
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:cinder@controller/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[root@openstack02 ~]# grep '^[a-z]' /etc/cinder/cinder.conf
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
my_ip = 192.168.1.82
enabled_backends = lvm
glance_api_servers = http://controller:9292
connection = mysql+pymysql://cinder:cinder@controller/cinder
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder
lock_path = /var/lib/cinder/tmp
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

7) Start the cinder services on the storage node and enable them at boot

Two services need to be started

systemctl start openstack-cinder-volume.service target.service
systemctl status openstack-cinder-volume.service target.service

systemctl enable openstack-cinder-volume.service target.service
systemctl list-unit-files |grep openstack-cinder |grep enabled
systemctl list-unit-files |grep target.service |grep enabled

Example:

[root@openstack02 ~]# systemctl start openstack-cinder-volume.service target.service
[root@openstack02 ~]# systemctl status openstack-cinder-volume.service target.service
● openstack-cinder-volume.service - OpenStack Cinder Volume Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; disabled; vendor preset: disabled)
Active: active (running) since 二 2018-10-30 18:23:10 CST; 668ms ago
Main PID: 2075 (cinder-volume)
Tasks: 1
CGroup: /system.slice/openstack-cinder-volume.service
└─2075 /usr/bin/python2 /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log

10月 30 18:23:10 openstack02.zuiyoujie.com systemd[1]: Started OpenStack Cinder Volume Server.
10月 30 18:23:10 openstack02.zuiyoujie.com systemd[1]: Starting OpenStack Cinder Volume Server…

● target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; disabled; vendor preset: disabled)
Active: active (exited) since 二 2018-10-30 18:23:11 CST; 49ms ago
Process: 2076 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: 2076 (code=exited, status=0/SUCCESS)

10月 30 18:23:10 openstack02.zuiyoujie.com systemd[1]: Starting Restore LIO kernel target configuration…
10月 30 18:23:11 openstack02.zuiyoujie.com target[2076]: No saved config file at /etc/target/saveconfig.json, ok, exiting
10月 30 18:23:11 openstack02.zuiyoujie.com systemd[1]: Started Restore LIO kernel target configuration.
[root@openstack02 ~]# systemctl enable openstack-cinder-volume.service target.service
Created symlink from /etc/systemd/system/multi-user.target.wants/openstack-cinder-volume.service to /usr/lib/systemd/system/openstack-cinder-volume.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@openstack02 ~]# systemctl list-unit-files |grep openstack-cinder |grep enabled
openstack-cinder-volume.service enabled
[root@openstack02 ~]# systemctl list-unit-files |grep target.service |grep enabled
target.service enabled

At this point the cinder installation on the storage node is complete.

9.3. Verify on the controller node
1) Load the admin credentials
cd /server/tools/
source keystone-admin-pass.sh
2) List the volume services
openstack volume service list

Example:

[root@openstack01 tools]# openstack volume service list
+------------------+-------------------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated At |
+------------------+-------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | openstack01.zuiyoujie.com | nova | enabled | up | 2018-10-31T10:55:19.000000 |
| cinder-volume | openstack02.zuiyoujie.com@lvm | nova | enabled | up | 2018-10-31T10:55:21.000000 |
+------------------+-------------------------------+------+---------+-------+----------------------------+

Output like the above means the cinder services on both nodes are installed and working.
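As a further end-to-end check, a small test volume can be created from the controller and then removed (a sketch; the volume name test-vol01 is arbitrary):

openstack volume create --size 1 test-vol01
openstack volume list                  # the volume should reach the available status once cinder-volume carves an LV out of cinder-volumes
openstack volume delete test-vol01     # clean up the test volume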

9.4. Recommendations for using cinder cloud disks
1) Cloud disks support migration, expansion, shrinking, and similar operations, but trying them in production is not recommended; they can be tried in a test environment, and even then the data should be backed up first.
2) For important data, prefer local disks over cloud disks; if something goes wrong, at least the data on each disk is separate and the disk files are still there.
3) In general, for an enterprise private cloud built with openstack, local disks are fine for production; cloud disks can be tried out in test environments.


