Installing Ceph using kolla-ansible for an all-in-one setup

Question


I am trying to deploy the all-in-one configuration using kolla-ansible with Ceph enabled:

enable_ceph: "yes"
#enable_ceph_mds: "no"
enable_ceph_rgw: "yes"
#enable_ceph_nfs: "no"
enable_ceph_dashboard: "{{ enable_ceph | bool }}"
#enable_chrony: "yes"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
cinder_backend_ceph: "{{ enable_ceph }}"
cinder_backup_driver: "ceph"
nova_backend_ceph: "{{ enable_ceph }}"

My setup is a VirtualBox VM running the Ubuntu 18.04.4 desktop version with 2 CPU cores, a single 30 GB disk, and 2 GB of RAM; the partition table type is msdos.

ansible version==2.9.7

kolla-ansible version==9.1.0

In order to install a Ceph OSD using kolla-ansible, I read that a partition should carry the name KOLLA_CEPH_OSD_BOOTSTRAP_BS.

Hence, I created a 20 GB root partition (/dev/sda1), then an extended partition (/dev/sda2) for the remaining space, followed by two logical partitions (/dev/sda5 and /dev/sda6) of 10 GB each for the OSDs. But with an msdos partition table there is no way to assign a name to a partition.
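To illustrate that limitation, a hedged check (assuming /dev/sda carries the msdos partition table described above): parted can print the table, but it does not support assigning partition names on msdos disk labels, so the naming command below fails.

sudo parted /dev/sda print    # shows "Partition Table: msdos"
sudo parted /dev/sda name 5 KOLLA_CEPH_OSD_BOOTSTRAP_BS
# parted rejects this on msdos disks; partition names are only supported
# on GPT (and a few other) disk labels, so the command above errors out.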

So my questions are:

  1. How do I label the partitions on an msdos partition table so that kolla-ansible recognizes that /dev/sda5 and /dev/sda6 are designated for the Ceph OSDs?
  2. Is it mandatory to have a storage drive separate from the one holding the operating system for the Ceph OSD (I know it is not recommended to put everything on a single disk)?
  3. How should I provision the space on my single drive in order to install the Ceph OSD using kolla-ansible?

P.S.: I also tried to install Ceph using kolla-ansible on an OpenStack VM (4 CPU cores, a single 80 GB drive, as I didn't install Cinder in my OpenStack infrastructure) running the Ubuntu 18.04.4 cloud image, which uses a GPT partition table and supports naming partitions. The partitions were as follows:

/dev/vda1 for root partition

/dev/vda2 for ceph OSD

/dev/vda3 for ceph OSD

The drawback was that kolla-ansible wiped the complete disk, which caused the installation to fail.

Any help is highly appreciated. Thanks a lot in advance.


Answer 1:


I have also installed a kolla-ansible single-node all-in-one setup with Ceph as the storage backend, so I had the same problem.

Yes, the bluestore installation of Ceph doesn't work with a single partition. I also tried different ways of labeling, but for me it only worked with a whole disk instead of a partition. So for your virtual setup, create a whole new disk, for example /dev/vdb.

For labeling, I used the following bash script:

#!/bin/bash
DEV="/dev/vdb"
(
echo g     # create a new GPT partition table
echo n     # new partition
echo       # partition number (accept default)
echo       # first sector (accept default)
echo +10G  # last sector (10 GB partition size)
echo w     # write changes
) | fdisk $DEV
parted $DEV -- name 1 KOLLA_CEPH_OSD_BOOTSTRAP_BS

Be aware that DEV at the top of the script must be set correctly for your setup. The script creates a new partition table and one 10 GB partition on the new disk. The kolla-ansible deploy run registers the label and then wipes the whole disk, so the size value does not matter; it only applies to the temporary partition on the disk.
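Before running the deploy, you can verify that the label is in place (a quick hedged check, assuming /dev/vdb as in the script above):

sudo parted /dev/vdb print                    # the "Name" column should show KOLLA_CEPH_OSD_BOOTSTRAP_BS
sudo lsblk -o NAME,SIZE,PARTLABEL /dev/vdb    # PARTLABEL should report the same name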

A single disk is enough for the Ceph OSD in kolla-ansible; you don't need a second OSD. For this, add the following config file to your kolla-ansible setup at /etc/kolla/config/ceph.conf (assuming you used the default kolla installation path), with this content:

[global]
osd pool default size = 1
osd pool default min size = 1

This makes sure that kolla-ansible requires only a single OSD, since the pools are created with a replication size of 1. If your kolla directory containing globals.yml is not under /etc/kolla/, you have to adjust the path of the config file accordingly.
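After the deployment you can check that the pools really use a replication size of 1 (a hedged check; it assumes the Ceph monitor container is named ceph_mon, the default in kolla-ansible's own Ceph deployment):

docker exec ceph_mon ceph -s                      # overall cluster health
docker exec ceph_mon ceph osd pool ls detail      # each pool should report "size 1 min_size 1"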

The solution for a setup with one single disk carrying multiple partitions is to switch the storage type of the Ceph OSDs in the kolla-ansible setup from bluestore to the older filestore OSD type. This also requires different partition labels, as described here: https://docs.openstack.org/kolla-ansible/rocky/reference/ceph-guide.html#using-an-external-journal-drive . With filestore you need one partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO and a small journal partition with the label KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J (the FOO in the name really is required...); a labeling sketch follows after the inventory snippet below.

To switch your kolla installation to the filestore OSD type, edit the [storage] section of the all-in-one file by adding ceph_osd_store_type=filestore next to the host, as follows, to override the default bluestore:

[storage]
localhost       ansible_connection=local ceph_osd_store_type=filestore
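For the filestore labels themselves, a minimal sketch (assuming the GPT layout from the question, with /dev/vda2 as the OSD data partition and /dev/vda3 as its journal; adjust the device and partition numbers to your own layout):

parted /dev/vda -- name 2 KOLLA_CEPH_OSD_BOOTSTRAP_FOO
parted /dev/vda -- name 3 KOLLA_CEPH_OSD_BOOTSTRAP_FOO_J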

The above method has been tested with ansible==2.9.7, kolla-ansible==9.1.0, and the OpenStack Train release as well as prior releases.



Source: https://stackoverflow.com/questions/61795737/installing-ceph-using-kolla-ansible-for-all-in-one-setup
