1. Overview
Ceph is an open-source software-defined storage platform that implements object storage on a single distributed computer cluster and provides 3-in-1 interfaces for object-, block-, and file-level storage. A Ceph Storage Cluster is a collection of Ceph Monitors, Ceph Managers, Ceph Metadata Servers, and OSDs that work together to store and replicate data for use by applications, Ceph Users, and Ceph Clients. Ceph also gives PostgreSQL an option to use network storage.
In this blog post, I will guide you through a simple process to set up an all-in-one Ceph storage cluster on a single machine, so that you can use this setup for simple experimental storage-related development or testing.
2. Setting up Ceph Storage Cluster
A production-ready Ceph Storage Cluster should include all the necessary components to properly manage the storage, but this blog post aims to provide a simple setup for developers to quickly experience a network storage cluster.
2.1. Setting up the basic environment
In this section, I used a single CentOS 7 virtual machine running on VirtualBox and added three Virtual Hard Disks (VHDs). The storage settings for this virtual machine look like the picture below:

Once you have logged into CentOS, add a new user named ceph by running the commands below:
$ sudo useradd -d /home/ceph -m ceph
$ sudo passwd ceph
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph
$ su - ceph
Then install the basic packages using the commands below. Docker is used as the container runtime and NTP is used to synchronize the clock:
$ sudo yum update
$ sudo yum install lvm2 docker ntp ntpdate ntp-doc python3
Check the status to make sure both Docker and NTPD are running properly:
$ sudo systemctl status ntpd
$ sudo systemctl status docker
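If either service is not running yet, enable and start it before continuing. A minimal sketch, assuming the stock service names on CentOS 7:
$ sudo systemctl enable ntpd docker
$ sudo systemctl start ntpd docker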
2.2. Setting up the Cluster Monitor
There are many tools that can be used to manage the Ceph storage cluster, but we will use the simple cephadm as our management tool.
First, change to the home directory and download cephadm using the commands below:
$ cd ~
$ curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
$ chmod +x cephadm
Next, add the Octopus release repository and install cephadm:
$ sudo ./cephadm add-repo --release octopus
$ sudo ./cephadm install
Now, use the cephadm bootstrap procedure to set up the first monitor daemon in the Ceph Storage Cluster. Replace 192.168.0.134 with your actual server IP address. We will use the simple credentials admin/password to keep the setup easy; otherwise, cephadm generates a random password and asks you to change it on first login.
$ sudo ./cephadm bootstrap --mon-ip 192.168.0.134 --dashboard-password-noupdate --initial-dashboard-user admin --initial-dashboard-password password
If everything is running smoothly, you should see some messages like the ones below.
Ceph Dashboard is now available at:
             URL: https://localhost.localdomain:8443/
            User: admin
        Password: password
You can access the Ceph CLI with:
        sudo ./cephadm shell --fsid 1117491a-ca74-11ed-87fc-080027a1115e -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
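At this point the host does not yet have the native ceph CLI installed, but you can already run cluster commands through the containerized shell mentioned in the output above, for example:
$ sudo ./cephadm shell -- ceph -s
We will install the regular command-line tools via ceph-common a little later.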
After setting up the cluster monitor, use a web browser to log in and check the current status. A typical dashboard will be displayed like the one shown below:

You can also install some common tools to check the Ceph storage cluster using the command line:
$ sudo cephadm install ceph-common
Then, check the status using the command below and you should see output similar to the following:
$ sudo ceph status
  cluster:
    id:     1117491a-ca74-11ed-87fc-080027a1115e
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum localhost.localdomain (age 9m)
    mgr: localhost.localdomain.trwhwk(active, since 8m)
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
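Besides ceph status, you can also ask the active manager which service endpoints it currently exposes, which is handy if the dashboard URL printed during bootstrap does not resolve from your browser. For example:
$ sudo ceph mgr services
This prints a small JSON document that includes the dashboard URL.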
2.3. Setting up the Object Storage Devices
As you can see in the ceph status output, only one monitor is running in this cluster and there are 0 OSDs (object storage devices). Before adding object storage devices to this storage cluster, let's use lsblk to check the current disk status.
$ lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   80G  0 disk
├─sda1              8:1    0    1G  0 part /boot
└─sda2              8:2    0   79G  0 part
  ├─centos-root   253:0    0 49.8G  0 lvm  /
  ├─centos-swap   253:1    0    5G  0 lvm  [SWAP]
  └─centos-home   253:2    0 24.3G  0 lvm  /home
sdb                 8:16   0   30G  0 disk
sdc                 8:32   0   30G  0 disk
sdd                 8:48   0   30G  0 disk
sr0                11:0    1 1024M  0 rom
The above is the output of the lsblk command, which shows the current disk status of the system. It lists all the available block devices, including their names, major and minor device numbers, size, type, and mount point (if any). In this case, we can see that there are three extra disks (sdb, sdc, and sdd) that are not being used yet.
Next, let’s use the Ceph device management command to check if these three disks can be added.
$ sudo ceph orch device ls
Hostname               Path      Type  Serial               Size   Health   Ident  Fault  Available
localhost.localdomain  /dev/sdb  hdd   VBc50561fa-4658471a  32.2G  Unknown  N/A    N/A    Yes
localhost.localdomain  /dev/sdc  hdd   VBc775712c-0dbb8b17  32.2G  Unknown  N/A    N/A    Yes
localhost.localdomain  /dev/sdd  hdd   VB4dfb0df0-a8f28653  32.2G  Unknown  N/A    N/A    Yes
All three disks show "Yes" in the "Available" column, indicating that we can add them to the storage cluster by running the following command:
$ sudo ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
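As an aside, if you prefer to add devices one at a time rather than consuming everything that is available, the orchestrator also accepts an explicit host:device pair. A minimal sketch, assuming the hostname reported by ceph orch device ls above:
$ sudo ceph orch daemon add osd localhost.localdomain:/dev/sdb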
The "Scheduled osd.all-available-devices update..." message indicates that the devices are being processed in the background, which may take anywhere from a few seconds to a few minutes. Wait a moment, then check the cluster status again.
$ sudo ceph status
  cluster:
    id:     1117491a-ca74-11ed-87fc-080027a1115e
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum localhost.localdomain (age 12m)
    mgr: localhost.localdomain.trwhwk(active, since 11m)
    osd: 3 osds: 3 up (since 20s), 3 in (since 20s); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 87 GiB / 90 GiB avail
    pgs:     1 active+undersized+remapped

  progress:
    Rebalancing after osd.1 marked in (18s)
      [............................]
We can now see that 3 OSDs have been added to this Ceph storage cluster. If we run the “lsblk” command again, we should see something like the output below, indicating that sdb, sdc, and sdd are now being used by the Ceph storage cluster.
$ lsblk
NAME              MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                 8:0    0   80G  0 disk
├─sda1              8:1    0    1G  0 part /boot
└─sda2              8:2    0   79G  0 part
  ├─centos-root   253:0    0 49.8G  0 lvm  /
  ├─centos-swap   253:1    0    5G  0 lvm  [SWAP]
  └─centos-home   253:2    0 24.3G  0 lvm  /home
sdb                 8:16   0   30G  0 disk
└─ceph--7062ccde--bf6b--4252--ba0a--a66e02c0839d-osd--block--5ca780b0--b4a3--49c3--b58d--3aba26b88d14 253:3 0 30G 0 lvm
sdc                 8:32   0   30G  0 disk
└─ceph--c38c357f--71ef--4641--acd4--8cb7bdf53520-osd--block--ae0bc25d--299e--45f0--af4f--890263970f1d 253:4 0 30G 0 lvm
sdd                 8:48   0   30G  0 disk
└─ceph--debbc90a--860a--4e8f--9e93--a3c6e48e7994-osd--block--ccd3d288--f87f--4fa3--983a--102d838ea2a1 253:5 0 30G 0 lvm
sr0                11:0    1 1024M  0 rom
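It is also worth checking how the three new OSDs were placed in the CRUSH hierarchy; they should all appear under the single host bucket:
$ sudo ceph osd tree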
If your Ceph storage cluster’s health status remains in the HEALTH_WARN state, you can try running the commands described in this article to resolve the issue.
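On an all-in-one node, the usual cause of the warning is that the default CRUSH rule wants to place replicas on separate hosts, which a single host cannot satisfy. Below is a minimal sketch of one common workaround, assuming the default Octopus pool name device_health_metrics; the commands in the linked article may differ:
# Create a replicated CRUSH rule whose failure domain is "osd" instead of "host"
$ sudo ceph osd crush rule create-replicated replicated_osd default osd
# Point the existing pool at the new rule
$ sudo ceph osd pool set device_health_metrics crush_rule replicated_osd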
After applying the workaround, your Ceph storage cluster should be in the HEALTH_OK state.
$ sudo ceph status
  cluster:
    id:     1117491a-ca74-11ed-87fc-080027a1115e
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum localhost.localdomain (age 17m)
    mgr: localhost.localdomain.trwhwk(active, since 16m)
    osd: 3 osds: 3 up (since 5m), 3 in (since 5m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 87 GiB / 90 GiB avail
    pgs:     1 active+clean
2.4. Testing the Ceph Storage Cluster locally
A Ceph storage cluster is primarily intended for network storage use cases, and we will look at block, object, and file access over the network later on. Here, we will create a CephFS file system, mount it locally, and perform a simple test.
To create a CephFS file system, run the commands below and verify the result:
$ sudo ceph fs volume create cephfs
$ sudo ceph fs volume ls
[
    {
        "name": "cephfs"
    }
]
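Creating the volume also asks the orchestrator to deploy MDS daemons for the new file system; you can confirm they are up before mounting, for example:
$ sudo ceph fs status cephfs
$ sudo ceph orch ps --daemon-type mds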
To mount the CephFS file system locally:
1) Create a secret for CephFS by running the following command:
$ sudo ceph fs authorize cephfs client.user / rw | sudo tee /etc/ceph/ceph.client.user.keyring
[client.user]
key = AQCcFR5kLAgrERAAdfgLAUqdGvRxghJrLUkBMw==
This command will create a secret and write it to a keyring file.
2) Create the mount point if it does not already exist (for example, sudo mkdir -p /media/cephfs), then mount the CephFS file system by running the following command:
$ sudo mount -t ceph :/ /media/cephfs -o name=user,secret=AQCcFR5kLAgrERAAdfgLAUqdGvRxghJrLUkBMw==
This command will mount the cephfs file system to the /media/cephfs directory.
3) Change the ownership of the mounted directory to ceph:ceph by running the following command:
$ sudo chown -R ceph:ceph /media/cephfs/
4) Create a file in the mounted directory and verify the content:
$ echo "hello ceph storage cluster" > /media/cephfs/hello.txt
$ cat /media/cephfs/hello.txt
hello ceph storage cluster
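You can also confirm from the cluster side that the CephFS pools now report some usage, for example:
$ sudo ceph df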
At this point, you can try to mount this Ceph storage cluster on another CentOS machine using some commands like the ones below. However, we will cover this topic in the next blog post.
ssh ceph@192.168.0.134 'sudo ceph-authtool -p /etc/ceph/ceph.client.admin.keyring' > client.admin.key
sudo mount -t ceph 192.168.0.134:/ /mnt/cephfs -o name=admin,secretfile=client.admin.key
3. Summary
In this blog post, we covered the process of setting up a simple Ceph storage cluster on a single virtual machine and performing a basic test of the CephFS file system. We created a Ceph user, set up the Ceph monitor and manager, added object storage devices, and created a CephFS file system. We then mounted the file system locally, created a file, and read its contents back to verify that the mount was successful. In future posts, we will explore more advanced use cases, such as running PostgreSQL on this all-in-one Ceph storage cluster using the CephFS file system.

A software developer specializing in C/C++ programming, with experience in hardware, firmware, software, database, network, and system architecture. Currently working at HighGo Software Inc. as a senior PostgreSQL architect.