The Hoof & Paw

Enabling RADOS


Enabling the Ceph Rados Gateway

The RedHat Production Grade OSG Guide1 has a LOT of useful info. I’d strongly encourage you to take some time to look through it to familiarize yourself with the moving pieces. There’s a LOT going on in there.

Setup

Secrets

Create the radosgw keyring
ceph-authtool --create-keyring /etc/pve/priv/ceph.client.radosgw.keyring
Do This on each node!
symlink the keyring to a location ceph knows to look
ln -s /etc/pve/priv/ceph.client.admin.keyring   /etc/ceph/ceph.client.admin.keyring
ln -s /etc/pve/priv/ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring
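If you ever re-run these steps, plain `ln -s` fails when the link already exists. A tiny helper using `ln -sfn` keeps the step idempotent (a sketch; the helper name is mine, the paths are the ones used above):

```shell
# link_keyring SRCDIR DSTDIR NAME -- (re)create a symlink to a keyring;
# -sfn replaces an existing link instead of erroring on re-runs
link_keyring() {
  ln -sfn "$1/$3" "$2/$3"
}

# usage, on each node:
#   link_keyring /etc/pve/priv /etc/ceph ceph.client.admin.keyring
#   link_keyring /etc/pve/priv /etc/ceph ceph.client.radosgw.keyring
```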
Node keys
Create a radosgw client key for each node
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-40 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-41 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-42 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-43 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-44 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-45 --gen-key
Privileges
Create the privilege tokens
Grant privilege to each of the newly minted keys
TARGET='/etc/pve/priv/ceph.client.radosgw.keyring'
for H in 40 41 42 43 44 45; do
  CLIENT="client.radosgw.px-m-${H}"
  ceph-authtool -n ${CLIENT} --cap osd 'allow rwx' --cap mon 'allow rwx' ${TARGET}
  echo "added privileges to ${CLIENT} in ${TARGET}"
done
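Before pushing the keys to the cluster, it's easy to sanity-check that all six entities (with their caps) actually landed in the keyring. A Ceph keyring is plain text with `[entity]` section headers, so no ceph tooling is needed (a sketch; the function name is mine):

```shell
# list_keyring_entities FILE -- print the entity names (the [client.…]
# section headers) present in a Ceph keyring file
list_keyring_entities() {
  sed -n 's/^\[\(.*\)\]$/\1/p' "$1"
}

# usage:
#   list_keyring_entities /etc/pve/priv/ceph.client.radosgw.keyring
```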
Add the newly minted auth tokens to the cluster

Using the admin keyring, add the newly minted tokens to the cluster.

Add the new keys to the cluster
ADMINKEY='/etc/pve/priv/ceph.client.admin.keyring'
TARGET='/etc/pve/priv/ceph.client.radosgw.keyring'
for H in 40 41 42 43 44 45; do
  CLIENT="client.radosgw.px-m-${H}"
  ceph -k ${ADMINKEY} auth add ${CLIENT} -i ${TARGET}
done
Output
added key for client.radosgw.px-m-40
added key for client.radosgw.px-m-41
added key for client.radosgw.px-m-42
added key for client.radosgw.px-m-43
added key for client.radosgw.px-m-44
added key for client.radosgw.px-m-45

Config

Update /etc/services

Adding RADOSgw to /etc/services makes various system tools aware of what the port is most likely used for, and displays the service name when viewing network connection states.

/etc/services
radosgw         7480/tcp                        # Ceph Rados gw
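If you script this step, it's worth making the edit idempotent so a re-run doesn't duplicate the line. Something like this sketch works (the helper name is mine):

```shell
# add_service FILE NAME PORT/PROTO COMMENT -- append an /etc/services-style
# entry only if NAME is not already present in FILE
add_service() {
  grep -q "^$2[[:space:]]" "$1" || \
    printf '%s\t\t%s\t\t\t# %s\n' "$2" "$3" "$4" >> "$1"
}

# usage:
#   add_service /etc/services radosgw 7480/tcp 'Ceph Rados gw'
```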

Adjusting Thread Cache Memory

The RH Guidance2 is to adjust Ceph’s TCMalloc setting to tune how much memory is allocated for ceph’s thread cache.

In RHEL/CentOS this is adjusted in /etc/sysconfig/ceph. However, ProxMox is based on Debian,
where the “default” config dir is /etc/default/; as such, the file to inspect is /etc/default/ceph.

When I looked, it was already set to what I believe to be an acceptable level:

Inspecting Ceph TCMalloc setting
root@px-m-41:/tmp/ceph-px-m-41#  more /etc/default/ceph
# /etc/default/ceph
#
# Environment file for ceph daemon systemd unit files.
#

# Increase tcmalloc cache size
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
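For reference, that value works out to a 128 MiB thread cache:

```shell
# 134217728 bytes / 1024 / 1024 = 128 MiB (2^27 bytes)
echo $(( 134217728 / 1024 / 1024 ))   # → 128
```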

Increase Systemic Limits

The RH Guidance3 is to increase the file descriptor limits for the ceph user in /etc/security/limits.conf:

/etc/security/limits.conf
ceph             soft    nproc           unlimited

Adjust ceph config file

Insert the following blob into the [global] section of /etc/ceph/ceph.conf:

add to the global section of /etc/ceph/ceph.conf
  rgw_dns_name = dog.wolfspyre.io
  rgw_relaxed_s3_bucket_names = true
  rgw_resolve_cname = true
  rgw_log_nonexistent_bucket = true
  rgw_enable_ops_log = true
  rgw_enable_usage_log = true
  osd_map_message_max = 10
  objecter_inflight_ops = 24576
  rgw_thread_pool_size = 512
references: 4 5 6 7 8 9 10

Things I considered adjusting, but didn’t

I thought about setting these in the config file, but left them at their defaults (most of which are undefined):

add to the global section of /etc/ceph/ceph.conf
rgw_default_region_info_oid
rgw_default_zone_info_oid
rgw_default_zonegroup_info_oid
rgw_realm
rgw_realm_id
rgw_realm_id_oid
rgw_region
rgw_region_root_pool
rgw_zone
rgw_zone_id
rgw_zone_root_pool
rgw_zonegroup
rgw_zonegroup_id
rgw_zonegroup_root_pool

Appending the following to /etc/ceph/ceph.conf is required in order for RADOS to work:

/etc/ceph/ceph.conf
[client.radosgw.px-m-40]
        host = px-m-40
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-41]
        host = px-m-41
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-42]
        host = px-m-42
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-43]
        host = px-m-43
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-44]
        host = px-m-44
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-45]
        host = px-m-45
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io
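Rather than hand-copying six nearly identical sections, they can be generated with a loop into a scratch file for review before appending to /etc/ceph/ceph.conf (a sketch; the scratch path is arbitrary, the node numbers and DNS name are the ones used throughout):

```shell
# Generate the six [client.radosgw.*] sections shown above into a scratch
# file; ${H} expands per node, while \$host stays literal for ceph to expand
OUT=/tmp/radosgw-client-sections.conf   # arbitrary scratch file
: > "${OUT}"
for H in 40 41 42 43 44 45; do
  cat <<EOF >> "${OUT}"
[client.radosgw.px-m-${H}]
        host = px-m-${H}
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.\$host.log
        rgw_dns_name = dog.wolfspyre.io

EOF
done
# review ${OUT}, then append it to /etc/ceph/ceph.conf
```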

Installation and Service Enablement

Wahoo! Ya did it! Now let’s go ahead and install the necessary packages and start the services!

Package Installation

apt-get install radosgw librados2-perl python3-rados librados2 librgw2

Service enablement

systemctl enable radosgw
service radosgw start

Making radosgw start properly.

This seemed to be necessary to get radosgw starting after the shared filesystem is mounted.

Adjusting Radosgw Start sequence
mkdir /lib/systemd/system/radosgw.service.d/
cat <<EOF > /lib/systemd/system/radosgw.service.d/ceph-after-pve-cluster.conf
[Unit]
After=pve-cluster.service
EOF
ln -s /lib/systemd/system/ceph-radosgw.target /etc/systemd/system/ceph.target.wants/ceph-radosgw.target
systemctl daemon-reload
… Kinda anticlimactic, huh?
Well… On with the show!