Part 2: Integrate OpenStack with Ceph Storage Cluster


This is part 2 of our tutorial on how to integrate OpenStack with a Ceph storage cluster. In cloud computing, OpenStack and Ceph stand as two prominent pillars, each offering distinct yet complementary capabilities. While OpenStack provides a comprehensive cloud computing platform, Ceph delivers distributed and scalable storage services.

Part 1: Integrating OpenStack with Ceph Storage Cluster

Part 3: Integrate OpenStack with Ceph Storage Cluster

Prepare Nodes for OpenStack Deployment and Integration with Ceph

Note that we are deploying OpenStack using Kolla-ansible.

In our previous guides, we discussed how to deploy multinode OpenStack as well as how to expand an OpenStack deployment by adding more controller nodes.

Deploy Multinode OpenStack using Kolla-Ansible

Add Controller Nodes into Existing OpenStack Cluster using Kolla-Ansible

We will build on these tutorials and make only a few changes to integrate our fresh OpenStack installation with the Ceph storage cluster.

Note: in our previous guide, we used an LVM disk and an NFS share provided by our storage node (storage01) for the OpenStack Cinder volumes and backups, and for the Glance images, respectively.

To integrate OpenStack with Ceph, we will eliminate the need for the storage node (storage01), since we already have a running Ceph cluster.

Thus, follow the OpenStack installation guide above accordingly and only make changes where necessary, as outlined below.

Configure Networking on the OpenStack Nodes

Install Required Packages on the Kolla-Ansible Deployment Node

Create Kolla-Ansible Deployment User Account on OpenStack Nodes

Create a Virtual Environment for Kolla-Ansible Deployment

Install Ansible on Deployment Node

Install Kolla-Ansible on the Deployment Node

Configure Kolla-ansible for Multinode OpenStack Deployment

Create Kolla configuration directory;

sudo mkdir /etc/kolla

Update the ownership of the Kolla config directory to the user with which you activated the Kolla-ansible deployment virtual environment;

sudo chown $USER:$USER /etc/kolla

Copy the main Kolla configuration file, globals.yml, and the OpenStack services passwords file, passwords.yml, from the virtual environment into the Kolla configuration directory created above.

cp $HOME/kolla-ansible/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/

Confirm;

ls /etc/kolla
globals.yml  passwords.yml
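
If you have not already copied the sample inventory files while following part 1, you can also copy them from the virtual environment into your current working directory, since the multinode inventory is referenced later in this guide (the path below assumes the virtual environment location used in part 1);

cp $HOME/kolla-ansible/share/kolla-ansible/ansible/inventory/* .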

Create Ceph Configuration Directories on OpenStack

You need to create a directory to store OpenStack Ceph client configurations. OpenStack Ceph clients in this context are the OpenStack nodes running glance_api, cinder_volume, nova_compute and cinder_backup.

Since we are using our controller node as our Kolla-ansible deployment node, we will create configuration directories for Glance, Nova and Cinder volume/backup to store Ceph configurations as follows.

kifarunix@controller01:~$ mkdir -p /etc/kolla/config/{glance,cinder/cinder-volume,cinder/cinder-backup,nova}

Copy Ceph Configurations to OpenStack Client Directories

Copy the Ceph configuration file, ceph.conf, from the Ceph admin node to each of the OpenStack service directories created above.

192.168.200.200 is my controller01 node, and kifarunix is the deployment user on it.

Glance:

ssh kifarunix@192.168.200.200 tee /etc/kolla/config/glance/ceph.conf < /etc/ceph/ceph.conf

Cinder Volume and Backup

ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-volume/ceph.conf < /etc/ceph/ceph.conf
ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-backup/ceph.conf < /etc/ceph/ceph.conf

Nova

ssh kifarunix@192.168.200.200 tee /etc/kolla/config/nova/ceph.conf < /etc/ceph/ceph.conf

Login to your Kolla-ansible deployment node and confirm the above;

kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.conf
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]

Confirm for cinder-backup and cinder-volume as well.

Create Ceph Credentials for OpenStack Clients

As already mentioned, our OpenStack Ceph clients in this context are the OpenStack Glance/Cinder services.

Any client that accesses the Ceph cluster needs to authenticate itself to Ceph in order to read, write or manage specific object data in the cluster. To achieve this, Ceph uses the CephX protocol. CephX authorizes users/clients and daemons to perform specific actions within a Ceph cluster. It enforces access control rules that determine which users and daemons are allowed to read, write, or manage data, as well as perform other administrative tasks.

Cephx authorization is primarily based on the concept of capabilities (commonly abbreviated as caps). Caps describe the permissions granted to an authenticated user to exercise the functionality of the monitors, OSDs, and metadata servers. They can also restrict access to data within a pool, a namespace within a pool, or a set of pools based on their application tags.

The Cephx authentication is enabled by default. It uses shared secret keys for authentication, meaning both the client and Ceph Monitors have a copy of the client’s secret key.
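
If you want to quickly confirm that cephx is enabled on your cluster, you can query the monitor configuration from the Ceph admin node. Each of the commands below should print cephx;

sudo ceph config get mon auth_cluster_required
sudo ceph config get mon auth_service_required
sudo ceph config get mon auth_client_required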

There are two common naming conventions for Ceph keyring files:

  1. User keyrings: For user keyrings, the file name typically follows the format ceph.client.username.keyring, where username is the name of the Ceph user. For example, the keyring file for the client.admin user would be named ceph.client.admin.keyring.
  2. Daemon keyrings: For daemon keyrings, the file name typically follows the format ceph.service.keyring, where service is the name of the Ceph daemon. For example, the keyring file for the ceph-osd daemon would be named ceph.osd.keyring.

When you deploy Ceph using tools such as cephadm, you will see that a keyring file for the admin user, ceph.client.admin.keyring, is created under the /etc/ceph/ directory. Authentication keys and capabilities for Ceph users and daemons are stored in keyring files.
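
On a cephadm-deployed admin node, you can simply list the /etc/ceph directory to see these files (the exact contents will vary with your deployment);

ls -l /etc/ceph/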

For example, if you run the ceph health command without specifying a user name or keyring, Ceph interprets the command like this:

ceph -n client.admin --keyring=/etc/ceph/ceph.client.admin.keyring health

You can list Ceph authentication state (all users in the cluster) using the command below;

sudo ceph auth ls
osd.0
	key: AQDH7HBlXtGxChAA7ToLWEBb+E5rMXy2AHFR7Q==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.1
	key: AQDL7HBl4rD5FBAAYn5E8aT3dX4Evj84IpgSYA==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.2
	key: AQDL7HBlEbhUHRAA10a3R6MI+gZkhwOO/b/bWA==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.3
	key: AQDM7HBlT7NaBRAApBwCaAA7KP7kIL9Sa3UlDQ==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.4
	key: AQDU7HBlc6ZBCBAAfc+X7ud86ED+reyxJQ/4hw==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.5
	key: AQDU7HBlMj2PORAA7XMnB0E9vMLzM/vhdYlUew==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
client.admin
	key: AQB06nBlLRqaARAAd6foD4m7LCacr35VkwbB8A==
	caps: [mds] allow *
	caps: [mgr] allow *
	caps: [mon] allow *
	caps: [osd] allow *
client.bootstrap-mds
	key: AQB26nBlveK/KxAAALdKzVeoRpTIPMWdZG+0ZA==
	caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
	key: AQB26nBliOi/KxAAoOBnL/3ZDlemCJb1EW/txA==
	caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
	key: AQB26nBlWu2/KxAAbxUUdBFxuldf0GjHR1lBIw==
	caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
	key: AQB26nBlJfK/KxAA76mYZYJmpj0tN6s0K2eOLw==
	caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
	key: AQB26nBl9fa/KxAA3mG63hMRfdeAH0rj+Y4JRg==
	caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
	key: AQB26nBl5/y/KxAAOtN2KnJgCh9HkiSSZB6c9g==
	caps: [mon] allow profile bootstrap-rgw
client.ceph-exporter.ceph-mon-osd01
	key: AQCo6nBlJXysAxAAFzvIyRTjJZFePGxt6SECyg==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.ceph-exporter.osd02
	key: AQBC7HBlKtjvBRAA1gN4CDBJT2YCMRW7k9F3HQ==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.ceph-exporter.osd03
	key: AQBn7HBlqpWgNxAACz49/HzFw9iPAu+pm78ncg==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.crash.ceph-mon-osd01
	key: AQCp6nBlqjT6MhAAlimlctchb6CwhgTFX7wBvA==
	caps: [mgr] profile crash
	caps: [mon] profile crash
client.crash.osd02
	key: AQBE7HBlXe9vDBAADWjCfeElTOMpG5/9Qs/jMw==
	caps: [mgr] profile crash
	caps: [mon] profile crash
client.crash.osd03
	key: AQBp7HBl5UGWJhAAEVAsSTGn4LzpnDTxes06LA==
	caps: [mgr] profile crash
	caps: [mon] profile crash
mgr.ceph-mon-osd01.nubjcu
	key: AQB06nBlQXVMFRAALNjTMBTZi5cL4bDYLpHxCg==
	caps: [mds] allow *
	caps: [mon] profile mgr
	caps: [osd] allow *
mgr.osd02.zfvugc
	key: AQBG7HBlGKgVHRAAo1MRpjTN8k9vQeuk4oMcyA==
	caps: [mds] allow *
	caps: [mon] profile mgr
	caps: [osd] allow *

You can get a specific user's key and capabilities as well. For example, to check the admin user;

sudo ceph auth get client.admin
[client.admin]
	key = AQB06nBlLRqaARAAd6foD4m7LCacr35VkwbB8A==
	caps mds = "allow *"
	caps mgr = "allow *"
	caps mon = "allow *"
	caps osd = "allow *"

Without further ado, create OpenStack Glance/Cinder Ceph credentials;

ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=<pool>' mgr 'profile rbd pool=<pool>'

Replace <pool> with your pool name, e.g.;

Glance Ceph credentials:

sudo ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images'

Cinder volume and backup credentials;

sudo ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volume, allow rx pool=glance-images'
sudo ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-backup'

The keys will be printed to standard output as well.
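
For instance, the Glance command above prints the newly created keyring entry; the key matches what ceph auth ls shows later in this guide;

[client.glance]
	key = AQChp3JlGbZvBxAA6nI65tVe3Yi+jJCUP86FwQ==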

So, what do the commands do exactly?

  • ceph: The command-line interface for interacting with the Ceph storage cluster.
  • auth: The auth subsystem in Ceph is responsible for managing authentication and authorization.
  • get-or-create: This part of the command is telling Ceph to either retrieve existing authentication information for the specified client (“client.glance”) or create it if it doesn’t exist.
  • client.glance: This is the name of the Ceph client for which the authentication information is being created or retrieved. In this case, it’s named “glance.”
  • mon 'allow r': Specifies the permissions for the Monitors (mon) in the Ceph cluster. It grants read-only (‘allow r’) permissions to the monitors.
  • osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images': Specifies the permissions for the Object Storage Daemons (OSD) in the Ceph cluster. It grants the following permissions:
    • allow class-read object_prefix rbd_children: Allows class-read operations on objects whose names begin with rbd_children, which RBD uses to track parent/child relationships for cloned images.
    • allow rwx pool=glance-images: Grants read, write, and execute permissions for the “glance-images” pool.

You can check the details using;

sudo ceph auth ls
osd.0
	key: AQDH7HBlXtGxChAA7ToLWEBb+E5rMXy2AHFR7Q==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.1
	key: AQDL7HBl4rD5FBAAYn5E8aT3dX4Evj84IpgSYA==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.2
	key: AQDL7HBlEbhUHRAA10a3R6MI+gZkhwOO/b/bWA==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.3
	key: AQDM7HBlT7NaBRAApBwCaAA7KP7kIL9Sa3UlDQ==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.4
	key: AQDU7HBlc6ZBCBAAfc+X7ud86ED+reyxJQ/4hw==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
osd.5
	key: AQDU7HBlMj2PORAA7XMnB0E9vMLzM/vhdYlUew==
	caps: [mgr] allow profile osd
	caps: [mon] allow profile osd
	caps: [osd] allow *
client.admin
	key: AQB06nBlLRqaARAAd6foD4m7LCacr35VkwbB8A==
	caps: [mds] allow *
	caps: [mgr] allow *
	caps: [mon] allow *
	caps: [osd] allow *
client.bootstrap-mds
	key: AQB26nBlveK/KxAAALdKzVeoRpTIPMWdZG+0ZA==
	caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
	key: AQB26nBliOi/KxAAoOBnL/3ZDlemCJb1EW/txA==
	caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
	key: AQB26nBlWu2/KxAAbxUUdBFxuldf0GjHR1lBIw==
	caps: [mon] allow profile bootstrap-osd
client.bootstrap-rbd
	key: AQB26nBlJfK/KxAA76mYZYJmpj0tN6s0K2eOLw==
	caps: [mon] allow profile bootstrap-rbd
client.bootstrap-rbd-mirror
	key: AQB26nBl9fa/KxAA3mG63hMRfdeAH0rj+Y4JRg==
	caps: [mon] allow profile bootstrap-rbd-mirror
client.bootstrap-rgw
	key: AQB26nBl5/y/KxAAOtN2KnJgCh9HkiSSZB6c9g==
	caps: [mon] allow profile bootstrap-rgw
client.ceph-exporter.ceph-mon-osd01
	key: AQCo6nBlJXysAxAAFzvIyRTjJZFePGxt6SECyg==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.ceph-exporter.osd02
	key: AQBC7HBlKtjvBRAA1gN4CDBJT2YCMRW7k9F3HQ==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.ceph-exporter.osd03
	key: AQBn7HBlqpWgNxAACz49/HzFw9iPAu+pm78ncg==
	caps: [mgr] allow r
	caps: [mon] allow r
	caps: [osd] allow r
client.cinder
	key: AQCgp3JlbaE2GBAAiG1Tpcjm/DyTXmYClvxTYQ==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cinder-volume, allow rx pool=glance-images
client.cinder-backup
	key: AQCgp3JliNIvLhAA7rgiP5zxA0wFIYCqFjoHvg==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=cinder-backup
client.crash.ceph-mon-osd01
	key: AQCp6nBlqjT6MhAAlimlctchb6CwhgTFX7wBvA==
	caps: [mgr] profile crash
	caps: [mon] profile crash
client.crash.osd02
	key: AQBE7HBlXe9vDBAADWjCfeElTOMpG5/9Qs/jMw==
	caps: [mgr] profile crash
	caps: [mon] profile crash
client.crash.osd03
	key: AQBp7HBl5UGWJhAAEVAsSTGn4LzpnDTxes06LA==
	caps: [mgr] profile crash
	caps: [mon] profile crash
client.glance
	key: AQChp3JlGbZvBxAA6nI65tVe3Yi+jJCUP86FwQ==
	caps: [mon] allow r
	caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=glance-images
mgr.ceph-mon-osd01.nubjcu
	key: AQB06nBlQXVMFRAALNjTMBTZi5cL4bDYLpHxCg==
	caps: [mds] allow *
	caps: [mon] profile mgr
	caps: [osd] allow *
mgr.osd02.zfvugc
	key: AQBG7HBlGKgVHRAAo1MRpjTN8k9vQeuk4oMcyA==
	caps: [mds] allow *
	caps: [mon] profile mgr
	caps: [osd] allow *

If you want to print the key only;

sudo ceph auth print-key TYPE.ID

Where TYPE is either client, osd, mon, or mds, and ID is the user name or the ID of the daemon.
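
For example, to print just the key for the client.glance user created above (only the bare key string is printed, with no keyring section header);

sudo ceph auth print-key client.glance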

You can also use the command, ceph auth get <username>.

sudo ceph auth get client.glance

If for some reason you want to delete a user and the associated caps, use the command ceph auth del TYPE.ID.

For example to delete client.cinder;

sudo ceph auth del client.cinder
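
If you only need to adjust the capabilities of an existing user rather than delete and recreate it, you can use ceph auth caps. Note that it replaces all existing caps for the user, so restate every capability you want to keep. The example below simply restates the Glance caps granted earlier;

sudo ceph auth caps client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images'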

Copy Ceph Credentials to OpenStack Clients

Once you have the credentials for the OpenStack services generated, copy them from the Ceph admin node to the client.

Be sure to remove the leading tabs on the configuration files.

In our case, we have already created the Ceph configuration files directory on our Kolla-ansible control node, which is controller01. So copying the OpenStack client/service keys to the node itself is as easy as running the commands below.

Glance:

sudo ceph auth get-or-create client.glance | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/glance/ceph.client.glance.keyring

Cinder Volume and Backup:

sudo ceph auth get-or-create client.cinder | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
sudo ceph auth get-or-create client.cinder-backup | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring

cinder-backup requires two keyrings: one for accessing the volumes pool and one for the backups pool. Hence, copy the cinder-volume keyring into the cinder-backup configuration directory as well.

sudo ceph auth get-or-create client.cinder | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring

Nova:

If you will be booting OpenStack instances from volumes that are stored, or need to be stored, on Ceph, then Nova must be configured to access the Cinder volume pool on Ceph.

Thus, copy both the Glance and Cinder volume keyrings to the Nova configuration directory.

sudo ceph auth get-or-create client.glance | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/nova/ceph.client.glance.keyring
sudo ceph auth get-or-create client.cinder | ssh kifarunix@192.168.200.200 tee /etc/kolla/config/nova/ceph.client.cinder.keyring

Confirm on the client;

kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.client.glance.keyring
[client.glance]
key = AQChp3JlGbZvBxAA6nI65tVe3Yi+jJCUP86FwQ==
cat /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring
[client.cinder]
key = AQCgp3JlbaE2GBAAiG1Tpcjm/DyTXmYClvxTYQ==
cat /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring
[client.cinder-backup]
key = AQCgp3JliNIvLhAA7rgiP5zxA0wFIYCqFjoHvg==
cat /etc/kolla/config/nova/ceph.client.cinder.keyring /etc/kolla/config/nova/ceph.client.glance.keyring
[client.cinder]
key = AQCgp3JlbaE2GBAAiG1Tpcjm/DyTXmYClvxTYQ==
[client.glance]
key = AQChp3JlGbZvBxAA6nI65tVe3Yi+jJCUP86FwQ==

Enable OpenStack Services Cephx Authentication

Next, update the Ceph OpenStack services configuration files to enable Cephx authentication.

kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.conf
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]

So, you need to update the configuration file to define the path to the keyring. The keyring will be installed under /etc/ceph/ceph.client.glance.keyring.

Similarly, enable Cephx authentication by adding the lines;

auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Note that the Ceph generated configuration files have leading tabs. These tabs break Kolla Ansible’s ini parser. Be sure to remove the leading tabs from your ceph.conf files when copying them in the following sections.
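
If you prefer to strip the leading whitespace in one go, a simple sed one-liner like the one below can be run on the deployment node (a sketch; adjust the paths if your directory layout differs);

sed -i 's/^[[:space:]]*//' /etc/kolla/config/{glance,nova,cinder/cinder-volume,cinder/cinder-backup}/ceph.conf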

See our updated configs below (leading tabs removed);

Glance:

kifarunix@controller01:~$ cat /etc/kolla/config/glance/ceph.conf
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[client.glance]
keyring = /etc/ceph/ceph.client.glance.keyring

Cinder Volume:

kifarunix@controller01:~$ cat /etc/kolla/config/cinder/cinder-volume/ceph.conf
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Cinder Backup:

kifarunix@controller01:~$ cat /etc/kolla/config/cinder/cinder-backup/ceph.conf
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Nova:

cat /etc/kolla/config/nova/ceph.conf
# minimal ceph.conf for 1e266088-9480-11ee-a7e1-738d8527cddc
[global]
fsid = 1e266088-9480-11ee-a7e1-738d8527cddc
mon_host = [v2:192.168.200.108:3300/0,v1:192.168.200.108:6789/0] [v2:192.168.200.109:3300/0,v1:192.168.200.109:6789/0] [v2:192.168.200.110:3300/0,v1:192.168.200.110:6789/0]
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Configure OpenStack Glance Client for Ceph

Create OpenStack Glance API configuration file to update some default values.

vim /etc/kolla/config/glance/glance-api.conf
[DEFAULT]
show_image_direct_url = True
debug = True

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = glance-images
rbd_store_user = glance
rbd_store_chunk_size = 8
rbd_store_ceph_conf = /etc/ceph/ceph.conf

Save and exit the file.

Read more about this on Glance API configuration file options.

Install Ceph RBD Secret Key on Compute Nodes

Ceph Block Device images can be attached to OpenStack instances through libvirt (the nova_libvirt container in the case of a Kolla-Ansible deployment). As a result, the libvirt process requires a secret key to access the Ceph cluster when attaching a block device. The secret is used to establish a secure communication channel with the RBD storage backend.

The RBD secret key installation is handled automatically when using Kolla-ansible for deployment.
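
Once the cluster is deployed (in the next part of this series), you can optionally confirm that the secret was installed by listing the libvirt secrets inside the nova_libvirt container on a compute node, for example;

sudo docker exec nova_libvirt virsh secret-list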

Configure OpenStack Cinder for Ceph

You need to configure OpenStack Cinder to interact with Ceph block storage. The pool name for the Cinder volume block devices is required.

sudo vim /etc/kolla/config/cinder/cinder-volume.conf
[DEFAULT]
enabled_backends = ceph
glance_api_version = 2

[ceph]
rbd_user = cinder
rbd_secret_uuid = {{ cinder_rbd_secret_uuid }}
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = cinder-volume
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1

The secret UUID will be obtained from the Kolla-ansible passwords file that we will generate at a later step.
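
Once the passwords are generated, you can confirm the value that Kolla-ansible will substitute for this placeholder with a quick grep on the deployment node, for example;

grep cinder_rbd_secret_uuid /etc/kolla/passwords.yml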

Read more on cinder.conf.

Similarly, configure Cinder backup as follows;

sudo vim /etc/kolla/config/cinder/cinder-backup.conf
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=cinder-backup
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

Save and exit the file.

Define Kolla-Ansible Global Deployment Options

In a Kolla-Ansible deployment, the globals.yml file is a key configuration file used to define global settings and parameters for the deployment. It contains variables that influence the behavior of Kolla-Ansible and its deployment of OpenStack services as Docker containers.

By default, this is how the globals.yml configuration looks;

cat /etc/kolla/globals.yml
---
# You can use this file to override _any_ variable throughout Kolla.
# Additional options can be found in the
# 'kolla-ansible/ansible/group_vars/all.yml' file. Default value of all the
# commented parameters are shown here, To override the default value uncomment
# the parameter and change its value.

###################
# Ansible options
###################

# This variable is used as the "filter" argument for the setup module.  For
# instance, if one wants to remove/ignore all Neutron interface facts:
# kolla_ansible_setup_filter: "ansible_[!qt]*"
# By default, we do not provide a filter.
#kolla_ansible_setup_filter: "{{ omit }}"

# This variable is used as the "gather_subset" argument for the setup module.
# For instance, if one wants to avoid collecting facts via facter:
# kolla_ansible_setup_gather_subset: "all,!facter"
# By default, we do not provide a gather subset.
#kolla_ansible_setup_gather_subset: "{{ omit }}"

# Dummy variable to allow Ansible to accept this file.
workaround_ansible_issue_8743: yes

# This variable is used as "any_errors_fatal" setting for the setup (gather
# facts) plays.
# This is useful for weeding out failing hosts early to avoid late failures
# due to missing facts (especially cross-host).
# Do note this still supports host fact caching and it will not affect
# scenarios with all facts cached (as there is no task to fail).
#kolla_ansible_setup_any_errors_fatal: false

###############
# Kolla options
###############
# Valid options are [ COPY_ONCE, COPY_ALWAYS ]
#config_strategy: "COPY_ALWAYS"

# Valid options are ['centos', 'debian', 'rocky', 'ubuntu']
#kolla_base_distro: "rocky"

# Do not override this unless you know what you are doing.
#openstack_release: "2023.1"

# Docker image tag used by default.
#openstack_tag: "{{ openstack_release ~ openstack_tag_suffix }}"

# Suffix applied to openstack_release to generate openstack_tag.
#openstack_tag_suffix: ""

# Location of configuration overrides
#node_custom_config: "{{ node_config }}/config"

# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. If you want to run an
# All-In-One without haproxy and keepalived, you can set enable_haproxy to no
# in "OpenStack options" section, and set this value to the IP of your
# 'network_interface' as set in the Networking section below.
#kolla_internal_vip_address: "10.10.10.254"

# This is the DNS name that maps to the kolla_internal_vip_address VIP. By
# default it is the same as kolla_internal_vip_address.
#kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"

# This should be a VIP, an unused IP on your network that will float between
# the hosts running keepalived for high-availability. It defaults to the
# kolla_internal_vip_address, allowing internal and external communication to
# share the same address.  Specify a kolla_external_vip_address to separate
# internal and external requests between two VIPs.
#kolla_external_vip_address: "{{ kolla_internal_vip_address }}"

# The Public address used to communicate with OpenStack as set in the public_url
# for the endpoints that will be created. This DNS name should map to
# kolla_external_vip_address.
#kolla_external_fqdn: "{{ kolla_external_vip_address }}"

# Optionally change the path to sysctl.conf modified by Kolla Ansible plays.
#kolla_sysctl_conf_path: /etc/sysctl.conf

################
# Container engine
################

# Valid options are [ docker ]
# kolla_container_engine: docker

################
# Docker options
################

# Custom docker registry settings:
#docker_registry:
# Please read the docs carefully before applying docker_registry_insecure.
#docker_registry_insecure: "no"
#docker_registry_username:
# docker_registry_password is set in the passwords.yml file.

# Namespace of images:
#docker_namespace: "kolla"

# Docker client timeout in seconds.
#docker_client_timeout: 120

#docker_configure_for_zun: "no"
#containerd_configure_for_zun: "no"
#containerd_grpc_gid: 42463

###################
# Messaging options
###################
# Whether to enable TLS for oslo.messaging communication with RabbitMQ.
#om_enable_rabbitmq_tls: "{{ rabbitmq_enable_tls | bool }}"
# CA certificate bundle in containers using oslo.messaging with RabbitMQ TLS.
#om_rabbitmq_cacert: "{{ rabbitmq_cacert }}"

##############################
# Neutron - Networking Options
##############################
# This interface is what all your api services will be bound to by default.
# Additionally, all vxlan/tunnel and storage network traffic will go over this
# interface by default. This interface must contain an IP address.
# It is possible for hosts to have non-matching names of interfaces - these can
# be set in an inventory file per host or per group or stored separately, see
#     http://docs.ansible.com/ansible/intro_inventory.html
# Yet another way to workaround the naming problem is to create a bond for the
# interface on all hosts and give the bond name here. Similar strategy can be
# followed for other types of interfaces.
#network_interface: "eth0"

# These can be adjusted for even more customization. The default is the same as
# the 'network_interface'. These interfaces must contain an IP address.
#kolla_external_vip_interface: "{{ network_interface }}"
#api_interface: "{{ network_interface }}"
#swift_storage_interface: "{{ network_interface }}"
#swift_replication_interface: "{{ swift_storage_interface }}"
#tunnel_interface: "{{ network_interface }}"
#dns_interface: "{{ network_interface }}"
#octavia_network_interface: "{{ api_interface }}"

# Configure the address family (AF) per network.
# Valid options are [ ipv4, ipv6 ]
#network_address_family: "ipv4"
#api_address_family: "{{ network_address_family }}"
#storage_address_family: "{{ network_address_family }}"
#swift_storage_address_family: "{{ storage_address_family }}"
#swift_replication_address_family: "{{ swift_storage_address_family }}"
#migration_address_family: "{{ api_address_family }}"
#tunnel_address_family: "{{ network_address_family }}"
#octavia_network_address_family: "{{ api_address_family }}"
#bifrost_network_address_family: "{{ network_address_family }}"
#dns_address_family: "{{ network_address_family }}"

# This is the raw interface given to neutron as its external network port. Even
# though an IP address can exist on this interface, it will be unusable in most
# configurations. It is recommended this interface not be configured with any IP
# addresses for that reason.
#neutron_external_interface: "eth1"

# Valid options are [ openvswitch, ovn, linuxbridge, vmware_nsxv, vmware_nsxv3, vmware_nsxp, vmware_dvs ]
# if vmware_nsxv3 or vmware_nsxp is selected, enable_openvswitch MUST be set to "no" (default is yes)
# Do note linuxbridge is *EXPERIMENTAL* in Neutron since Zed and it requires extra tweaks to config to be usable.
# For details, see: https://docs.openstack.org/neutron/latest/admin/config-experimental-framework.html
#neutron_plugin_agent: "openvswitch"

# Valid options are [ internal, infoblox ]
#neutron_ipam_driver: "internal"

# Configure Neutron upgrade option, currently Kolla support
# two upgrade ways for Neutron: legacy_upgrade and rolling_upgrade
# The variable "neutron_enable_rolling_upgrade: yes" is meaning rolling_upgrade
# were enabled and opposite
# Neutron rolling upgrade were enable by default
#neutron_enable_rolling_upgrade: "yes"

# Configure neutron logging framework to log ingress/egress connections to instances
# for security groups rules. More information can be found here:
# https://docs.openstack.org/neutron/latest/admin/config-logging.html
#enable_neutron_packet_logging: "no"

####################
# keepalived options
####################
# Arbitrary unique number from 0..255
# This should be changed from the default in the event of a multi-region deployment
# where the VIPs of different regions reside on a common subnet.
#keepalived_virtual_router_id: "51"

###################
# Dimension options
###################
# This is to provide an extra option to deploy containers with Resource constraints.
# We call it dimensions here.
# The dimensions for each container are defined by a mapping, where each dimension value should be a
# string.
# Reference_Docs
# https://docs.docker.com/config/containers/resource_constraints/
# eg:
# _dimensions:
#    blkio_weight:
#    cpu_period:
#    cpu_quota:
#    cpu_shares:
#    cpuset_cpus:
#    cpuset_mems:
#    mem_limit:
#    mem_reservation:
#    memswap_limit:
#    kernel_memory:
#    ulimits:

#####################
# Healthcheck options
#####################
#enable_container_healthchecks: "yes"
# Healthcheck options for Docker containers
# interval/timeout/start_period are in seconds
#default_container_healthcheck_interval: 30
#default_container_healthcheck_timeout: 30
#default_container_healthcheck_retries: 3
#default_container_healthcheck_start_period: 5

##################
# Firewall options
##################
# Configures firewalld on both ubuntu and centos systems
# for enabled services.
# firewalld should be installed beforehand.
# disable_firewall: "true"
# enable_external_api_firewalld: "false"
# external_api_firewalld_zone: "public"

#############
# TLS options
#############
# To provide encryption and authentication on the kolla_external_vip_interface,
# TLS can be enabled.  When TLS is enabled, certificates must be provided to
# allow clients to perform authentication.
#kolla_enable_tls_internal: "no"
#kolla_enable_tls_external: "{{ kolla_enable_tls_internal if kolla_same_external_internal_vip | bool else 'no' }}"
#kolla_certificates_dir: "{{ node_config }}/certificates"
#kolla_external_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy.pem"
#kolla_internal_fqdn_cert: "{{ kolla_certificates_dir }}/haproxy-internal.pem"
#kolla_admin_openrc_cacert: ""
#kolla_copy_ca_into_containers: "no"
#haproxy_backend_cacert: "{{ 'ca-certificates.crt' if kolla_base_distro in ['debian', 'ubuntu'] else 'ca-bundle.trust.crt' }}"
#haproxy_backend_cacert_dir: "/etc/ssl/certs"

##################
# Backend options
##################
#kolla_httpd_keep_alive: "60"
#kolla_httpd_timeout: "60"

#####################
# Backend TLS options
#####################
#kolla_enable_tls_backend: "no"
#kolla_verify_tls_backend: "yes"
#kolla_tls_backend_cert: "{{ kolla_certificates_dir }}/backend-cert.pem"
#kolla_tls_backend_key: "{{ kolla_certificates_dir }}/backend-key.pem"

#####################
# ACME client options
#####################
# A list of haproxy backend server directives pointing to addresses used by the
# ACME client to complete http-01 challenge.
# Please read the docs for more details.
#acme_client_servers: []

################
# Region options
################
# Use this option to change the name of this region.
#openstack_region_name: "RegionOne"

# Use this option to define a list of region names - only needs to be configured
# in a multi-region deployment, and then only in the *first* region.
#multiple_regions_names: ["{{ openstack_region_name }}"]

###################
# OpenStack options
###################
# Use these options to set the various log levels across all OpenStack projects
# Valid options are [ True, False ]
#openstack_logging_debug: "False"

# Enable core OpenStack services. This includes:
# glance, keystone, neutron, nova, heat, and horizon.
#enable_openstack_core: "yes"

# These roles are required for Kolla to be operation, however a savvy deployer
# could disable some of these required roles and run their own services.
#enable_glance: "{{ enable_openstack_core | bool }}"
#enable_hacluster: "no"
#enable_haproxy: "yes"
#enable_keepalived: "{{ enable_haproxy | bool }}"
#enable_keystone: "{{ enable_openstack_core | bool }}"
#enable_mariadb: "yes"
#enable_memcached: "yes"
#enable_neutron: "{{ enable_openstack_core | bool }}"
#enable_nova: "{{ enable_openstack_core | bool }}"
#enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
#enable_outward_rabbitmq: "{{ enable_murano | bool }}"

# OpenStack services can be enabled or disabled with these options
#enable_aodh: "no"
#enable_barbican: "no"
#enable_blazar: "no"
#enable_ceilometer: "no"
#enable_ceilometer_ipmi: "no"
#enable_cells: "no"
#enable_central_logging: "no"
#enable_ceph_rgw: "no"
#enable_ceph_rgw_loadbalancer: "{{ enable_ceph_rgw | bool }}"
#enable_cinder: "no"
#enable_cinder_backup: "yes"
#enable_cinder_backend_hnas_nfs: "no"
#enable_cinder_backend_iscsi: "{{ enable_cinder_backend_lvm | bool }}"
#enable_cinder_backend_lvm: "no"
#enable_cinder_backend_nfs: "no"
#enable_cinder_backend_quobyte: "no"
#enable_cinder_backend_pure_iscsi: "no"
#enable_cinder_backend_pure_fc: "no"
#enable_cinder_backend_pure_roce: "no"
#enable_cloudkitty: "no"
#enable_collectd: "no"
#enable_cyborg: "no"
#enable_designate: "no"
#enable_destroy_images: "no"
#enable_etcd: "no"
#enable_fluentd: "yes"
#enable_freezer: "no"
#enable_gnocchi: "no"
#enable_gnocchi_statsd: "no"
#enable_grafana: "no"
#enable_grafana_external: "{{ enable_grafana | bool }}"
#enable_heat: "{{ enable_openstack_core | bool }}"
#enable_horizon: "{{ enable_openstack_core | bool }}"
#enable_horizon_blazar: "{{ enable_blazar | bool }}"
#enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
#enable_horizon_designate: "{{ enable_designate | bool }}"
#enable_horizon_freezer: "{{ enable_freezer | bool }}"
#enable_horizon_heat: "{{ enable_heat | bool }}"
#enable_horizon_ironic: "{{ enable_ironic | bool }}"
#enable_horizon_magnum: "{{ enable_magnum | bool }}"
#enable_horizon_manila: "{{ enable_manila | bool }}"
#enable_horizon_masakari: "{{ enable_masakari | bool }}"
#enable_horizon_mistral: "{{ enable_mistral | bool }}"
#enable_horizon_murano: "{{ enable_murano | bool }}"
#enable_horizon_neutron_vpnaas: "{{ enable_neutron_vpnaas | bool }}"
#enable_horizon_octavia: "{{ enable_octavia | bool }}"
#enable_horizon_sahara: "{{ enable_sahara | bool }}"
#enable_horizon_senlin: "{{ enable_senlin | bool }}"
#enable_horizon_solum: "{{ enable_solum | bool }}"
#enable_horizon_tacker: "{{ enable_tacker | bool }}"
#enable_horizon_trove: "{{ enable_trove | bool }}"
#enable_horizon_vitrage: "{{ enable_vitrage | bool }}"
#enable_horizon_watcher: "{{ enable_watcher | bool }}"
#enable_horizon_zun: "{{ enable_zun | bool }}"
#enable_influxdb: "{{ enable_cloudkitty | bool and cloudkitty_storage_backend == 'influxdb' }}"
#enable_ironic: "no"
#enable_ironic_neutron_agent: "{{ enable_neutron | bool and enable_ironic | bool }}"
#enable_iscsid: "{{ enable_cinder | bool and enable_cinder_backend_iscsi | bool }}"
#enable_kuryr: "no"
#enable_magnum: "no"
#enable_manila: "no"
#enable_manila_backend_generic: "no"
#enable_manila_backend_hnas: "no"
#enable_manila_backend_cephfs_native: "no"
#enable_manila_backend_cephfs_nfs: "no"
#enable_manila_backend_glusterfs_nfs: "no"
#enable_mariabackup: "no"
#enable_masakari: "no"
#enable_mistral: "no"
#enable_multipathd: "no"
#enable_murano: "no"
#enable_neutron_vpnaas: "no"
#enable_neutron_sriov: "no"
#enable_neutron_dvr: "no"
#enable_neutron_qos: "no"
#enable_neutron_agent_ha: "no"
#enable_neutron_bgp_dragent: "no"
#enable_neutron_provider_networks: "no"
#enable_neutron_segments: "no"
#enable_neutron_sfc: "no"
#enable_neutron_trunk: "no"
#enable_neutron_metering: "no"
#enable_neutron_infoblox_ipam_agent: "no"
#enable_neutron_port_forwarding: "no"
#enable_nova_serialconsole_proxy: "no"
#enable_nova_ssh: "yes"
#enable_octavia: "no"
#enable_octavia_driver_agent: "{{ enable_octavia | bool and neutron_plugin_agent == 'ovn' }}"
#enable_opensearch: "{{ enable_central_logging | bool or enable_osprofiler | bool or (enable_cloudkitty | bool and cloudkitty_storage_backend == 'elasticsearch') }}"
#enable_opensearch_dashboards: "{{ enable_opensearch | bool }}"
#enable_opensearch_dashboards_external: "{{ enable_opensearch_dashboards | bool }}"
#enable_openvswitch: "{{ enable_neutron | bool and neutron_plugin_agent != 'linuxbridge' }}"
#enable_ovn: "{{ enable_neutron | bool and neutron_plugin_agent == 'ovn' }}"
#enable_ovs_dpdk: "no"
#enable_osprofiler: "no"
#enable_placement: "{{ enable_nova | bool or enable_zun | bool }}"
#enable_prometheus: "no"
#enable_proxysql: "no"
#enable_redis: "no"
#enable_sahara: "no"
#enable_senlin: "no"
#enable_skyline: "no"
#enable_solum: "no"
#enable_swift: "no"
#enable_swift_s3api: "no"
#enable_tacker: "no"
#enable_telegraf: "no"
#enable_trove: "no"
#enable_trove_singletenant: "no"
#enable_venus: "no"
#enable_vitrage: "no"
#enable_watcher: "no"
#enable_zun: "no"

##################
# RabbitMQ options
##################
# Options passed to RabbitMQ server startup script via the
# RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment var.
# See Kolla Ansible docs RabbitMQ section for details.
# These are appended to args already provided by Kolla Ansible
# to configure IPv6 in RabbitMQ server.
# More details can be found in the RabbitMQ docs:
# https://www.rabbitmq.com/runtime.html#scheduling
# https://www.rabbitmq.com/runtime.html#busy-waiting
# The default tells RabbitMQ to always use two cores (+S 2:2),
# and not to busy wait (+sbwt none +sbwtdcpu none +sbwtdio none):
#rabbitmq_server_additional_erl_args: "+S 2:2 +sbwt none +sbwtdcpu none +sbwtdio none"
# Whether to enable TLS encryption for RabbitMQ client-server communication.
#rabbitmq_enable_tls: "no"
# CA certificate bundle in RabbitMQ container.
#rabbitmq_cacert: "/etc/ssl/certs/{{ 'ca-certificates.crt' if kolla_base_distro in ['debian', 'ubuntu'] else 'ca-bundle.trust.crt' }}"

#################
# MariaDB options
#################
# List of additional WSREP options
#mariadb_wsrep_extra_provider_options: []

#######################
# External Ceph options
#######################
# External Ceph - cephx auth enabled (this is the standard nowadays, defaults to yes)
#external_ceph_cephx_enabled: "yes"

# Glance
#ceph_glance_keyring: "ceph.client.glance.keyring"
#ceph_glance_user: "glance"
#ceph_glance_pool_name: "images"
# Cinder
#ceph_cinder_keyring: "ceph.client.cinder.keyring"
#ceph_cinder_user: "cinder"
#ceph_cinder_pool_name: "volumes"
#ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring"
#ceph_cinder_backup_user: "cinder-backup"
#ceph_cinder_backup_pool_name: "backups"
# Nova
#ceph_nova_keyring: "{{ ceph_cinder_keyring }}"
#ceph_nova_user: "nova"
#ceph_nova_pool_name: "vms"
# Gnocchi
#ceph_gnocchi_keyring: "ceph.client.gnocchi.keyring"
#ceph_gnocchi_user: "gnocchi"
#ceph_gnocchi_pool_name: "gnocchi"
# Manila
#ceph_manila_keyring: "ceph.client.manila.keyring"
#ceph_manila_user: "manila"

#############################
# Keystone - Identity Options
#############################

#keystone_admin_user: "admin"

#keystone_admin_project: "admin"

# Interval to rotate fernet keys by (in seconds). Must be an interval of
# 60(1 min), 120(2 min), 180(3 min), 240(4 min), 300(5 min), 360(6 min),
# 600(10 min), 720(12 min), 900(15 min), 1200(20 min), 1800(30 min),
# 3600(1 hour), 7200(2 hour), 10800(3 hour), 14400(4 hour), 21600(6 hour),
# 28800(8 hour), 43200(12 hour), 86400(1 day), 604800(1 week).
#fernet_token_expiry: 86400


########################
# Glance - Image Options
########################
# Configure image backend.
#glance_backend_ceph: "no"
#glance_backend_file: "yes"
#glance_backend_swift: "no"
#glance_backend_vmware: "no"
#enable_glance_image_cache: "no"
#glance_enable_property_protection: "no"
#glance_enable_interoperable_image_import: "no"
# Configure glance upgrade option.
# Due to this feature being experimental in glance,
# the default value is "no".
#glance_enable_rolling_upgrade: "no"

####################
# Osprofiler options
####################
# valid values: ["elasticsearch", "redis"]
#osprofiler_backend: "elasticsearch"

##################
# Barbican options
##################
# Valid options are [ simple_crypto, p11_crypto ]
#barbican_crypto_plugin: "simple_crypto"
#barbican_library_path: "/usr/lib/libCryptoki2_64.so"

#################
# Gnocchi options
#################
# Valid options are [ file, ceph, swift ]
#gnocchi_backend_storage: "{% if enable_swift | bool %}swift{% else %}file{% endif %}"

# Valid options are [redis, '']
#gnocchi_incoming_storage: "{{ 'redis' if enable_redis | bool else '' }}"

################################
# Cinder - Block Storage Options
################################
# Enable / disable Cinder backends
#cinder_backend_ceph: "no"
#cinder_backend_vmwarevc_vmdk: "no"
#cinder_backend_vmware_vstorage_object: "no"
#cinder_volume_group: "cinder-volumes"
# Valid options are [ '', redis, etcd ]
#cinder_coordination_backend: "{{ 'redis' if enable_redis|bool else 'etcd' if enable_etcd|bool else '' }}"

# Valid options are [ nfs, swift, ceph ]
#cinder_backup_driver: "ceph"
#cinder_backup_share: ""
#cinder_backup_mount_options_nfs: ""

#######################
# Cloudkitty options
#######################
# Valid option is gnocchi
#cloudkitty_collector_backend: "gnocchi"
# Valid options are 'sqlalchemy' or 'influxdb'. The default value is
# 'influxdb', which matches the default in Cloudkitty since the Stein release.
# When the backend is "influxdb", we also enable Influxdb.
# Also, when using 'influxdb' as the backend, we trigger the configuration/use
# of Cloudkitty storage backend version 2.
#cloudkitty_storage_backend: "influxdb"

###################
# Designate options
###################
# Valid options are [ bind9 ]
#designate_backend: "bind9"
#designate_ns_record:
#  - "ns1.example.org"
# Valid options are [ '', redis ]
#designate_coordination_backend: "{{ 'redis' if enable_redis|bool else '' }}"

########################
# Nova - Compute Options
########################
#nova_backend_ceph: "no"

# Valid options are [ qemu, kvm, vmware ]
#nova_compute_virt_type: "kvm"

# The number of fake driver per compute node
#num_nova_fake_per_node: 5

# The flag "nova_safety_upgrade" need to be consider when
# "nova_enable_rolling_upgrade" is enabled. The "nova_safety_upgrade"
# controls whether the nova services are all stopped before rolling
# upgrade to the new version, for the safety and availability.
# If "nova_safety_upgrade" is "yes", that will stop all nova services (except
# nova-compute) for no failed API operations before upgrade to the
# new version. And opposite.
#nova_safety_upgrade: "no"

# Valid options are [ none, novnc, spice ]
#nova_console: "novnc"

##############################
# Neutron - networking options
##############################
# Enable distributed floating ip for OVN deployments
#neutron_ovn_distributed_fip: "no"

# Enable DHCP agent(s) to use with OVN
#neutron_ovn_dhcp_agent: "no"

#############################
# Horizon - Dashboard Options
#############################
#horizon_backend_database: "{{ enable_murano | bool }}"

#############################
# Ironic options
#############################
# dnsmasq bind interface for Ironic Inspector, by default is network_interface
#ironic_dnsmasq_interface: "{{ network_interface }}"
# The following value must be set when enabling ironic, the value format is a
# list of ranges - at least one must be configured, for example:
# - range: 192.168.0.10,192.168.0.100
# See Kolla Ansible docs on Ironic for details.
#ironic_dnsmasq_dhcp_ranges:
# PXE bootloader file for Ironic Inspector, relative to /var/lib/ironic/tftpboot.
#ironic_dnsmasq_boot_file: "pxelinux.0"

# Configure ironic upgrade option, due to currently kolla support
# two upgrade ways for ironic: legacy_upgrade and rolling_upgrade
# The variable "ironic_enable_rolling_upgrade: yes" is meaning rolling_upgrade
# were enabled and opposite
# Rolling upgrade were enable by default
#ironic_enable_rolling_upgrade: "yes"

# List of extra kernel parameters passed to the kernel used during inspection
#ironic_inspector_kernel_cmdline_extras: []

# Valid options are [ '', redis, etcd ]
#ironic_coordination_backend: "{{ 'redis' if enable_redis|bool else 'etcd' if enable_etcd|bool else '' }}"

######################################
# Manila - Shared File Systems Options
######################################
# HNAS backend configuration
#hnas_ip:
#hnas_user:
#hnas_password:
#hnas_evs_id:
#hnas_evs_ip:
#hnas_file_system_name:

# CephFS backend configuration.
# External Ceph FS name.
# By default this is empty to allow Manila to auto-find the first FS available.
#manila_cephfs_filesystem_name:

# Gluster backend configuration
# The option of glusterfs share layout can be directory or volume
# The default option of share layout is 'volume'
#manila_glusterfs_share_layout:
# The default option of nfs server type is 'Gluster'
#manila_glusterfs_nfs_server_type:

# Volume layout Options (required)
# If the glusterfs server requires remote ssh, then you need to fill
# in 'manila_glusterfs_servers', ssh user 'manila_glusterfs_ssh_user', and ssh password
# 'manila_glusterfs_ssh_password'.
# 'manila_glusterfs_servers' value List of GlusterFS servers which provide volumes,
# the format is for example:
#   - 10.0.1.1
#   - 10.0.1.2
#manila_glusterfs_servers:
#manila_glusterfs_ssh_user:
#manila_glusterfs_ssh_password:
# Used to filter GlusterFS volumes for share creation.
# Examples: manila-share-volume-\\d+$, manila-share-volume-#{size}G-\\d+$;
#manila_glusterfs_volume_pattern:

# Directory layout Options
# If the glusterfs server is on the local node of the manila share,
# it’s of the format <glusterfs_server>:/<glusterfs_volume>
# If the glusterfs server is on a remote node,
# it’s of the format <username>@<glusterfs_server>:/<glusterfs_volume> ,
# and define 'manila_glusterfs_ssh_password'
#manila_glusterfs_target:
#manila_glusterfs_mount_point_base:

################################
# Swift - Object Storage Options
################################
# Swift expects block devices to be available for storage. Two types of storage
# are supported: 1 - storage device with a special partition name and filesystem
# label, 2 - unpartitioned disk  with a filesystem. The label of this filesystem
# is used to detect the disk which Swift will be using.

# Swift support two matching modes, valid options are [ prefix, strict ]
#swift_devices_match_mode: "strict"

# This parameter defines matching pattern: if "strict" mode was selected,
# for swift_devices_match_mode then swift_device_name should specify the name of
# the special swift partition for example: "KOLLA_SWIFT_DATA", if "prefix" mode was
# selected then swift_devices_name should specify a pattern which would match to
# filesystems' labels prepared for swift.
#swift_devices_name: "KOLLA_SWIFT_DATA"

# Configure swift upgrade option, due to currently kolla support
# two upgrade ways for swift: legacy_upgrade and rolling_upgrade
# The variable "swift_enable_rolling_upgrade: yes" is meaning rolling_upgrade
# were enabled and opposite
# Rolling upgrade were enable by default
#swift_enable_rolling_upgrade: "yes"

###################################
# VMware - OpenStack VMware support
###################################
#vmware_vcenter_host_ip:
#vmware_vcenter_host_username:
#vmware_vcenter_host_password:
#vmware_datastore_name:
#vmware_vcenter_name:
#vmware_vcenter_cluster_name:

############
# Prometheus
############
#enable_prometheus_server: "{{ enable_prometheus | bool }}"
#enable_prometheus_haproxy_exporter: "{{ enable_haproxy | bool }}"
#enable_prometheus_mysqld_exporter: "{{ enable_mariadb | bool }}"
#enable_prometheus_node_exporter: "{{ enable_prometheus | bool }}"
#enable_prometheus_cadvisor: "{{ enable_prometheus | bool }}"
#enable_prometheus_fluentd_integration: "{{ enable_prometheus | bool and enable fluentd | bool }}"
#enable_prometheus_memcached: "{{ enable_prometheus | bool }}"
#enable_prometheus_alertmanager: "{{ enable_prometheus | bool }}"
#enable_prometheus_alertmanager_external: "{{ enable_prometheus_alertmanager | bool }}"
#enable_prometheus_ceph_mgr_exporter: "no"
#enable_prometheus_openstack_exporter: "{{ enable_prometheus | bool }}"
#enable_prometheus_elasticsearch_exporter: "{{ enable_prometheus | bool and enable_elasticsearch | bool }}"
#enable_prometheus_blackbox_exporter: "{{ enable_prometheus | bool }}"
#enable_prometheus_libvirt_exporter: "{{ enable_prometheus | bool and enable_nova | bool and nova_compute_virt_type in ['kvm', 'qemu'] }}"
#enable_prometheus_etcd_integration: "{{ enable_prometheus | bool and enable_etcd | bool }}"
#enable_prometheus_msteams: "no"

# The labels to add to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager).
# prometheus_external_labels:
#   <label_name>: <label_value>
# By default, prometheus_external_labels is empty
#prometheus_external_labels:

# List of extra parameters passed to prometheus. You can add as many to the list.
#prometheus_cmdline_extras:

# List of extra parameters passed to cAdvisor. By default system cgroups
# and container labels are not exposed to reduce time series cardinality.
#prometheus_cadvisor_cmdline_extras: "--docker_only --store_container_labels=false --disable_metrics=percpu,referenced_memory,cpu_topology,resctrl,udp,advtcp,sched,hugetlb,memory_numa,tcp,process"

# Extra parameters passed to Prometheus exporters.
#prometheus_blackbox_exporter_cmdline_extras:
#prometheus_elasticsearch_exporter_cmdline_extras:
#prometheus_haproxy_exporter_cmdline_extras:
#prometheus_memcached_exporter_cmdline_extras:
#prometheus_mysqld_exporter_cmdline_extras:
#prometheus_node_exporter_cmdline_extras:
#prometheus_openstack_exporter_cmdline_extras:

# Example of setting endpoints for prometheus ceph mgr exporter.
# You should add all ceph mgr's in your external ceph deployment.
#prometheus_ceph_mgr_exporter_endpoints:
#  - host1:port1
#  - host2:port2

#########
# Freezer
#########
# Freezer can utilize two different database backends, elasticsearch or mariadb.
# Elasticsearch is preferred, however it is not compatible with the version deployed
# by kolla-ansible. You must first setup an external elasticsearch with 2.3.0.
# By default, kolla-ansible deployed mariadb is the used database backend.
#freezer_database_backend: "mariadb"

##########
# Telegraf
##########
# Configure telegraf to use the docker daemon itself as an input for
# telemetry data.
#telegraf_enable_docker_input: "no"

##########################################
# Octavia - openstack loadbalancer Options
##########################################
# Whether to run Kolla Ansible's automatic configuration for Octavia.
# NOTE: if you upgrade from Ussuri, you must set `octavia_auto_configure` to `no`
# and keep your other Octavia config like before.
#octavia_auto_configure: yes

# Octavia amphora flavor.
# See os_nova_flavor for details. Supported parameters:
# - flavorid (optional)
# - is_public (optional)
# - name
# - vcpus
# - ram
# - disk
# - ephemeral (optional)
# - swap (optional)
# - extra_specs (optional)
#octavia_amp_flavor:
#  name: "amphora"
#  is_public: no
#  vcpus: 1
#  ram: 1024
#  disk: 5

# Octavia security groups. lb-mgmt-sec-grp is for amphorae.
#octavia_amp_security_groups:
#    mgmt-sec-grp:
#      name: "lb-mgmt-sec-grp"
#      rules:
#        - protocol: icmp
#        - protocol: tcp
#          src_port: 22
#          dst_port: 22
#        - protocol: tcp
#          src_port: "{{ octavia_amp_listen_port }}"
#          dst_port: "{{ octavia_amp_listen_port }}"

# Octavia management network.
# See os_network and os_subnet for details. Supported parameters:
# - external (optional)
# - mtu (optional)
# - name
# - provider_network_type (optional)
# - provider_physical_network (optional)
# - provider_segmentation_id (optional)
# - shared (optional)
# - subnet
# The subnet parameter has the following supported parameters:
# - allocation_pool_start (optional)
# - allocation_pool_end (optional)
# - cidr
# - enable_dhcp (optional)
# - gateway_ip (optional)
# - name
# - no_gateway_ip (optional)
# - ip_version (optional)
# - ipv6_address_mode (optional)
# - ipv6_ra_mode (optional)
#octavia_amp_network:
#  name: lb-mgmt-net
#  shared: false
#  subnet:
#    name: lb-mgmt-subnet
#    cidr: "{{ octavia_amp_network_cidr }}"
#    no_gateway_ip: yes
#    enable_dhcp: yes

# Octavia management network subnet CIDR.
#octavia_amp_network_cidr: 10.1.0.0/24

#octavia_amp_image_tag: "amphora"

# Load balancer topology options are [ SINGLE, ACTIVE_STANDBY ]
#octavia_loadbalancer_topology: "SINGLE"

# The following variables are ignored as along as `octavia_auto_configure` is set to `yes`.
#octavia_amp_image_owner_id:
#octavia_amp_boot_network_list:
#octavia_amp_secgroup_list:
#octavia_amp_flavor_id:

####################
# Corosync options
####################

# this is UDP port
#hacluster_corosync_port: 5405

To deploy multinode OpenStack and integrate with Ceph storage cluster, we will define our Kolla-ansible global deployment options as follows;

vim /etc/kolla/globals.yml
---
workaround_ansible_issue_8743: yes
config_strategy: "COPY_ALWAYS"
kolla_base_distro: "ubuntu"
openstack_release: "2023.1"
kolla_internal_vip_address: "192.168.200.254"
kolla_container_engine: docker
docker_configure_for_zun: "yes"
containerd_configure_for_zun: "yes"
docker_apt_package_pin: "5:20.*"
network_address_family: "ipv4"
neutron_plugin_agent: "openvswitch"
enable_openstack_core: "yes"
enable_glance: "{{ enable_openstack_core | bool }}"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_keystone: "{{ enable_openstack_core | bool }}"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "{{ enable_openstack_core | bool }}"
enable_nova: "{{ enable_openstack_core | bool }}"
enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
enable_aodh: "yes"
enable_ceilometer: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_etcd: "yes"
enable_gnocchi: "yes"
enable_gnocchi_statsd: "yes"
enable_grafana: "yes"
enable_heat: "{{ enable_openstack_core | bool }}"
enable_horizon: "{{ enable_openstack_core | bool }}"
enable_horizon_heat: "{{ enable_heat | bool }}"
enable_horizon_zun: "{{ enable_zun | bool }}"
enable_kuryr: "yes"
enable_prometheus: "yes"
enable_zun: "yes"
external_ceph_cephx_enabled: "yes"
ceph_glance_keyring: "ceph.client.glance.keyring"
ceph_glance_user: "glance"
ceph_glance_pool_name: "glance-images"
ceph_cinder_keyring: "ceph.client.cinder.keyring"
ceph_cinder_user: "cinder"
ceph_cinder_pool_name: "cinder-volume"
ceph_cinder_backup_keyring: "ceph.client.cinder-backup.keyring"
ceph_cinder_backup_user: "cinder-backup"
ceph_cinder_backup_pool_name: "cinder-backup"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"

We have enabled Ceph as the backend for Glance and Cinder using the options below. The values in parentheses are the Kolla-ansible defaults; we override the pool names to match the pools created on our Ceph cluster, and external_ceph_cephx_enabled turns on cephx authentication against the external cluster.

Glance:

  • glance_backend_ceph: "yes"
  • ceph_glance_keyring (default: ceph.client.glance.keyring)
  • ceph_glance_user (default: glance)
  • ceph_glance_pool_name (default: images)

Cinder:

  • cinder_backend_ceph: "yes"
  • ceph_cinder_keyring (default: ceph.client.cinder.keyring)
  • ceph_cinder_user (default: cinder)
  • ceph_cinder_pool_name (default: volumes)
  • ceph_cinder_backup_keyring (default: ceph.client.cinder-backup.keyring)
  • ceph_cinder_backup_user (default: cinder-backup)
  • ceph_cinder_backup_pool_name (default: backups)

Update the values to suit your environment, then save and exit the globals configuration file.
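
Before moving on, you may want to confirm that the Ceph configuration file and keyrings referenced by the options above are in place under the Kolla configuration directory. The layout below is only an illustrative sketch; the keyring file names must match the ceph_*_keyring values in globals.yml and reflect what you copied in the earlier steps of this series.

ls -R /etc/kolla/config
# Illustrative expectation (not verbatim output):
#   glance/                -> ceph.conf, ceph.client.glance.keyring
#   cinder/cinder-volume/  -> ceph.conf, ceph.client.cinder.keyring
#   cinder/cinder-backup/  -> ceph.conf, ceph.client.cinder.keyring, ceph.client.cinder-backup.keyring
#   nova/                  -> ceph.conf, ceph.client.cinder.keyring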

Update Kolla-Ansible OpenStack Nodes Inventory File

Next, open the Kolla-ansible multinode inventory file and update it to match your nodes. For the storage group, we include both the controller and compute nodes, since the actual storage backend is Ceph, as defined in the globals configuration file.

Our multinode inventory for this OpenStack deployment with Ceph integration is shown below.
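
As a side note, if you do not already have the multinode inventory in your working directory, you can copy the sample that ships with Kolla-ansible; the path below assumes the virtual environment location used earlier in this guide.

cp $HOME/kolla-ansible/share/kolla-ansible/ansible/inventory/multinode .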

cat multinode | grep -vE "^#"
[control]
controller01 ansible_connection=local neutron_external_interface=vethext
controller[02:03] neutron_external_interface=vethext

[network]
controller01 ansible_connection=local neutron_external_interface=vethext network_interface=br0
controller[02:03] neutron_external_interface=vethext network_interface=br0

[compute]
compute[01:02] neutron_external_interface=enp7s0 network_interface=enp1s0

[monitoring]
controller01 ansible_connection=local neutron_external_interface=vethext
controller[02:03] neutron_external_interface=vethext

[storage]
controller01 ansible_connection=local neutron_external_interface=vethext network_interface=br0
controller[02:03] neutron_external_interface=vethext network_interface=br0
compute[01:02] neutron_external_interface=enp7s0 network_interface=enp1s0

[deployment]
controller01       ansible_connection=local

[baremetal:children]
control
network
compute
storage
monitoring

[tls-backend:children]
control

[common:children]
control
network
compute
storage
monitoring

[collectd:children]
compute

[grafana:children]
monitoring

[etcd:children]
control

[influxdb:children]
monitoring

[prometheus:children]
monitoring

[kafka:children]
control

[telegraf:children]
compute
control
monitoring
network
storage

[hacluster:children]
control

[hacluster-remote:children]
compute

[loadbalancer:children]
network

[mariadb:children]
control

[rabbitmq:children]
control

[outward-rabbitmq:children]
control

[monasca-agent:children]
compute
control
monitoring
network
storage

[monasca:children]
monitoring

[storm:children]
monitoring

[keystone:children]
control

[glance:children]
control

[nova:children]
control

[neutron:children]
network

[openvswitch:children]
network
compute
manila-share

[cinder:children]
control

[cloudkitty:children]
control

[freezer:children]
control

[memcached:children]
control

[horizon:children]
control

[swift:children]
control

[barbican:children]
control

[heat:children]
control

[murano:children]
control

[solum:children]
control

[ironic:children]
control

[magnum:children]
control

[sahara:children]
control

[mistral:children]
control

[manila:children]
control

[ceilometer:children]
control

[aodh:children]
control

[cyborg:children]
control
compute

[gnocchi:children]
control

[tacker:children]
control

[trove:children]
control

[senlin:children]
control

[vitrage:children]
control

[watcher:children]
control

[octavia:children]
control

[designate:children]
control

[placement:children]
control

[bifrost:children]
deployment

[zookeeper:children]
control

[zun:children]
control

[skyline:children]
control

[redis:children]
control

[blazar:children]
control

[venus:children]
monitoring


[cron:children]
common

[fluentd:children]
common

[kolla-logs:children]
common

[kolla-toolbox:children]
common

[opensearch:children]
control

[opensearch-dashboards:children]
opensearch

[glance-api:children]
glance

[nova-api:children]
nova

[nova-conductor:children]
nova

[nova-super-conductor:children]
nova

[nova-novncproxy:children]
nova

[nova-scheduler:children]
nova

[nova-spicehtml5proxy:children]
nova

[nova-compute-ironic:children]
nova

[nova-serialproxy:children]
nova

[neutron-server:children]
control

[neutron-dhcp-agent:children]
neutron

[neutron-l3-agent:children]
neutron

[neutron-metadata-agent:children]
neutron

[neutron-ovn-metadata-agent:children]
compute
network

[neutron-bgp-dragent:children]
neutron

[neutron-infoblox-ipam-agent:children]
neutron

[neutron-metering-agent:children]
neutron

[ironic-neutron-agent:children]
neutron

[neutron-ovn-agent:children]
compute
network

[cinder-api:children]
cinder

[cinder-backup:children]
storage

[cinder-scheduler:children]
cinder

[cinder-volume:children]
storage

[cloudkitty-api:children]
cloudkitty

[cloudkitty-processor:children]
cloudkitty

[freezer-api:children]
freezer

[freezer-scheduler:children]
freezer

[iscsid:children]
compute
storage
ironic

[tgtd:children]
storage

[manila-api:children]
manila

[manila-scheduler:children]
manila

[manila-share:children]
network

[manila-data:children]
manila

[swift-proxy-server:children]
swift

[swift-account-server:children]
storage

[swift-container-server:children]
storage

[swift-object-server:children]
storage

[barbican-api:children]
barbican

[barbican-keystone-listener:children]
barbican

[barbican-worker:children]
barbican

[heat-api:children]
heat

[heat-api-cfn:children]
heat

[heat-engine:children]
heat

[murano-api:children]
murano

[murano-engine:children]
murano

[monasca-agent-collector:children]
monasca-agent

[monasca-agent-forwarder:children]
monasca-agent

[monasca-agent-statsd:children]
monasca-agent

[monasca-api:children]
monasca

[monasca-log-persister:children]
monasca

[monasca-log-metrics:children]
monasca

[monasca-thresh:children]
monasca

[monasca-notification:children]
monasca

[monasca-persister:children]
monasca

[storm-worker:children]
storm

[storm-nimbus:children]
storm

[ironic-api:children]
ironic

[ironic-conductor:children]
ironic

[ironic-inspector:children]
ironic

[ironic-tftp:children]
ironic

[ironic-http:children]
ironic

[magnum-api:children]
magnum

[magnum-conductor:children]
magnum

[sahara-api:children]
sahara

[sahara-engine:children]
sahara

[solum-api:children]
solum

[solum-worker:children]
solum

[solum-deployer:children]
solum

[solum-conductor:children]
solum

[solum-application-deployment:children]
solum

[solum-image-builder:children]
solum

[mistral-api:children]
mistral

[mistral-executor:children]
mistral

[mistral-engine:children]
mistral

[mistral-event-engine:children]
mistral

[ceilometer-central:children]
ceilometer

[ceilometer-notification:children]
ceilometer

[ceilometer-compute:children]
compute

[ceilometer-ipmi:children]
compute

[aodh-api:children]
aodh

[aodh-evaluator:children]
aodh

[aodh-listener:children]
aodh

[aodh-notifier:children]
aodh

[cyborg-api:children]
cyborg

[cyborg-agent:children]
compute

[cyborg-conductor:children]
cyborg

[gnocchi-api:children]
gnocchi

[gnocchi-statsd:children]
gnocchi

[gnocchi-metricd:children]
gnocchi

[trove-api:children]
trove

[trove-conductor:children]
trove

[trove-taskmanager:children]
trove

[multipathd:children]
compute
storage

[watcher-api:children]
watcher

[watcher-engine:children]
watcher

[watcher-applier:children]
watcher

[senlin-api:children]
senlin

[senlin-conductor:children]
senlin

[senlin-engine:children]
senlin

[senlin-health-manager:children]
senlin

[octavia-api:children]
octavia

[octavia-driver-agent:children]
octavia

[octavia-health-manager:children]
octavia

[octavia-housekeeping:children]
octavia

[octavia-worker:children]
octavia

[designate-api:children]
designate

[designate-central:children]
designate

[designate-producer:children]
designate

[designate-mdns:children]
network

[designate-worker:children]
designate

[designate-sink:children]
designate

[designate-backend-bind9:children]
designate

[placement-api:children]
placement

[zun-api:children]
zun

[zun-wsproxy:children]
zun

[zun-compute:children]
compute

[zun-cni-daemon:children]
compute

[skyline-apiserver:children]
skyline

[skyline-console:children]
skyline

[tacker-server:children]
tacker

[tacker-conductor:children]
tacker

[vitrage-api:children]
vitrage

[vitrage-notifier:children]
vitrage

[vitrage-graph:children]
vitrage

[vitrage-ml:children]
vitrage

[vitrage-persistor:children]
vitrage

[blazar-api:children]
blazar

[blazar-manager:children]
blazar

[prometheus-node-exporter:children]
monitoring
control
compute
network
storage

[prometheus-mysqld-exporter:children]
mariadb

[prometheus-haproxy-exporter:children]
loadbalancer

[prometheus-memcached-exporter:children]
memcached

[prometheus-cadvisor:children]
monitoring
control
compute
network
storage

[prometheus-alertmanager:children]
monitoring

[prometheus-openstack-exporter:children]
monitoring

[prometheus-elasticsearch-exporter:children]
opensearch

[prometheus-blackbox-exporter:children]
monitoring

[prometheus-libvirt-exporter:children]
compute

[prometheus-msteams:children]
prometheus-alertmanager

[masakari-api:children]
control

[masakari-engine:children]
control

[masakari-hostmonitor:children]
control

[masakari-instancemonitor:children]
compute

[ovn-controller:children]
ovn-controller-compute
ovn-controller-network

[ovn-controller-compute:children]
compute

[ovn-controller-network:children]
network

[ovn-database:children]
control

[ovn-northd:children]
ovn-database

[ovn-nb-db:children]
ovn-database

[ovn-sb-db:children]
ovn-database

[venus-api:children]
venus

[venus-manager:children]
venus
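
With the inventory updated, you can optionally verify that Ansible can reach all the hosts before proceeding. This is just a quick connectivity check, run from the deployment node with the Kolla-ansible virtual environment activated.

ansible -i multinode all -m ping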

Generate Kolla-Ansible Passwords

The Kolla passwords.yml configuration file stores the passwords for the various OpenStack services. You can automatically generate them using the kolla-genpwd command from within your virtual environment.

Ensure that your virtual environment is activated;

source $HOME/kolla-ansible/bin/activate

Next, generate the passwords;

kolla-genpwd

All generated passwords are written to the /etc/kolla/passwords.yml file.
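
You can spot-check the result by looking up one of the generated entries, for example the Keystone admin password (any key in the file works; this one is just an example):

grep keystone_admin_password /etc/kolla/passwords.yml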

Configure OpenStack Nova for Ceph

Nova's libvirt also needs the secret UUID associated with the Ceph cinder client in order to authenticate and access Cinder volumes stored in the Ceph cluster. Kolla-ansible generates this UUID as cinder_rbd_secret_uuid in the passwords file.

Extract the Cinder RBD secret UUID from the passwords file;

grep cinder_rbd_secret /etc/kolla/passwords.yml

Sample output;

cinder_rbd_secret_uuid: 4fce4d52-a2c0-467a-b7b7-949a0bf614c1

Then, define this UUID in the Nova compute configuration override file as follows;

vim /etc/kolla/config/nova/nova-compute.conf
[libvirt]
rbd_secret_uuid = 4fce4d52-a2c0-467a-b7b7-949a0bf614c1

Save and exit the file.
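
Alternatively, the two steps above can be combined into a small shell snippet so the UUID never has to be copied by hand. This is only a convenience sketch; it assumes the passwords file and config paths used in this guide, and it overwrites any existing nova-compute.conf override.

# Pull the cinder RBD secret UUID from the Kolla passwords file
secret=$(awk '/^cinder_rbd_secret_uuid:/ {print $2}' /etc/kolla/passwords.yml)
# Write the Nova libvirt override with that UUID
cat > /etc/kolla/config/nova/nova-compute.conf <<EOF
[libvirt]
rbd_secret_uuid = ${secret}
EOF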

Proceed to part 3, the final part of this series on how to integrate OpenStack with Ceph storage cluster.

Part 3: Integrating OpenStack with Ceph Storage Cluster
