Ceph Luminous on a Raspberry Pi 3B

slb   March 18, 2019

I recently decided to poke around and see how difficult it was to install Ceph on a Raspberry Pi. While it has been done a bunch of times before, I wanted to see how the latest and greatest (Spring 2019-ish) code worked, and how much effort was required. The great news is that it’s almost turn-key at this point using an arm64 Debian distribution. The Luminous Ceph packages are available and they work!

It’s not fast by any stretch, primarily because the Ethernet and the USB-attached disk contend for the RPi 3B’s single USB 2.0 bus. But it does work. Longer term, memory consumption is an issue. But I’ve let it idle for over a week and heartbeats/scrubs seem to run OK.
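If memory does become a problem, one knob worth knowing about (a sketch, assuming BlueStore OSDs; I didn't actually need it for this test) is the BlueStore cache size in ceph.conf, so four OSDs fit more comfortably in the Pi 3B's 1GB of RAM:

# Guessed value, not tested: cap each OSD's BlueStore cache at 128MiB
[osd]
bluestore cache size = 134217728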

This whole setup becomes far more interesting when you consider it applied to one of the RPi alternatives, some of which have PCIe NVMe slots, much faster networking, and beefier CPUs. Especially when you consider the price trajectory of NVMe TLC and QLC storage devices!

I’m mostly just pasting the commands from my shell history here so I can use them another time.

Cookbook-level instructions

First, get the “Buster” beta for the Raspberry Pi and write it to an SD card. More info: https://itsfoss.com/debian-raspberry-pi/
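For reference, writing the image from another Linux box looks roughly like this (the image filename and /dev/sdX are placeholders; substitute whatever you downloaded and whatever device node your SD card enumerates as):

# Placeholder names: use the real image file and your SD card's device
 xzcat buster-arm64-preview.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync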

As root:


# Initial installation of packages not installed by default with Debian arm64 buster
 apt-get update
 apt-get upgrade
 apt-get install lvm2 sudo gnupg gnupg2 gnupg1 lsb-base lsb-release

# Setup the ceph repo and add some more packages
 wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
 echo deb https://download.ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
 apt clean
 apt-get update
 apt-get install ceph-deploy
 adduser ceph-deploy
 echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-deploy

# /dev/sda is a USB attached SSD... I set it up with 4 partitions to do 4 OSDs
 fdisk /dev/sda
# had to reboot, kernel wouldn't see new partition table otherwise
 shutdown -r now
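If you’d rather script the partitioning than drive fdisk interactively, a parted invocation along these lines gives the same four-way split (the equal 25% slices are my assumption; size to taste):

# Sketch: GPT label plus four equal partitions, one per OSD
 parted -s /dev/sda mklabel gpt \
     mkpart osd1 0% 25% \
     mkpart osd2 25% 50% \
     mkpart osd3 50% 75% \
     mkpart osd4 75% 100%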

# For some reason, the packages ceph-deploy installed from the armhf repo set up above didn't include the ceph-volume systemd service, so grab the unit file by hand.
 cd /lib/systemd/system/
 wget https://raw.githubusercontent.com/ceph/ceph/master/systemd/ceph-volume%40.service
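After hand-placing a unit file, tell systemd to re-read its configuration so the new service becomes visible:

 systemctl daemon-reload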

As user ceph-deploy:


# Set up public-key ssh access (to ourselves).
# The server is called 3pio in DNS.
 ssh-keygen
 ssh 3pio                        # accept the host key once
 ssh-copy-id ceph-deploy@3pio
 ssh 3pio                        # should now log in without a password prompt
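ceph-deploy needs non-interactive ssh and sudo on every target it touches, so it’s worth confirming that neither will prompt:

# Should print "ok" with no password or passphrase prompt
 ssh -o BatchMode=yes 3pio sudo -n echo ok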


# Use ceph-deploy to create a cluster (and install the ceph software)
 mkdir testcluster
 cd testcluster
 ceph-deploy new 3pio
 vim ceph.conf                   # tweak for a single node; see the sketch below
 ceph-deploy install --release luminous --no-adjust-repos 3pio
 ceph-deploy mon create-initial
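I didn’t keep a record of exactly what I changed in ceph.conf, but a single-node cluster generally wants something along these lines (a sketch, not my actual file):

# Additions under [global] for a one-host cluster (values are assumptions)
osd pool default size = 2
osd pool default min size = 1
osd crush chooseleaf type = 0    # place replicas across OSDs rather than hosts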

# For some reason ceph-deploy doesn't copy the keyrings to /etc/ceph, so I did this manually.
 sudo cp *keyring* /etc/ceph
 sudo chmod o+rx /etc/ceph/*
 ceph-deploy osd create 3pio --data /dev/sda1
 ceph-deploy osd create 3pio --data /dev/sda2
 ceph-deploy osd create 3pio --data /dev/sda3
 ceph-deploy osd create 3pio --data /dev/sda4
 ceph-deploy admin 3pio
 ceph-deploy mgr create 3pio
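Since the four OSDs land on sequential partitions, the four osd create lines above collapse to a loop if you prefer:

# Equivalent to the four osd create commands
 for n in 1 2 3 4; do ceph-deploy osd create 3pio --data /dev/sda$n; done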

# Ceph should be up, check it out and do a few things
 ceph -s
 ceph osd tree
 ceph osd pool create testpool 512 512 replicated 0
 ceph -s
 rados bench 15 write -p testpool -b 524288
 ceph -s
 ceph df
 ceph osd df
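rados bench can also read back what it wrote if you keep the objects around with --no-cleanup on the write pass (a sketch; remember to clean up afterwards):

# Write without cleanup, read it back sequentially, then delete the bench objects
 rados bench 15 write -p testpool -b 524288 --no-cleanup
 rados bench 15 seq -p testpool
 rados -p testpool cleanup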

All done!

root@3pio:/home/pi# ceph -s
  cluster:
    id:     b6623e8d-a8f8-455f-83ca-1b49752c2a1e
    health: HEALTH_WARN
            application not enabled on 1 pool(s)

  services:
    mon: 1 daemons, quorum 3pio
    mgr: 3pio(active)
    osd: 4 osds: 4 up, 4 in

  data:
    pools:   1 pools, 128 pgs
    objects: 1.77k objects, 885MiB
    usage:   5.77GiB used, 194GiB / 200GiB avail
    pgs:     128 active+clean
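The HEALTH_WARN above is just Luminous pointing out that the test pool was never tagged with an application. One command clears it (rbd is an arbitrary choice for a scratch pool):

 ceph osd pool application enable testpool rbd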