Uninstalling and Resetting Ceph on Proxmox

I've recently been playing around with Ceph storage on Proxmox and ran into an interesting issue that took a bit of digging to figure out. I plan on publishing a full walkthrough on getting Ceph up and running, but for now I thought I'd publish an article on my steps for uninstalling and resetting it on an existing cluster.

Originally, when I started playing with Proxmox, I tried building a Ceph storage cluster on my Dell R630s. That didn't work out, since Ceph wants direct access to the disks and doesn't really work behind RAID controllers, so I scrapped the idea. Fast forward to today: after moving my cluster over to Minisforum MS-A2 mini PCs, Ceph is once again a great way to build fast, reliable storage.

When I went to configure Ceph after upgrading to Proxmox 9 and adding dedicated storage drives, my old Ceph configuration, still pointing at my R630s, was there waiting for me. It had followed my cluster through the migration.

In this article, I’m going to outline the steps and commands I had to use to reset my Ceph configuration back to default.

The first thing we need to do is make sure Ceph isn't running. I did this by fully uninstalling it, just to make sure every node was starting from the same clean baseline. I came across a forum post where the user “Rares” outlines the commands to uninstall Ceph and reset most of the configuration. You'll need to go into a Shell on each node and run these commands:

# Remove the Ceph systemd units and kill any daemons still running
rm -rf /etc/systemd/system/ceph*
killall -9 ceph-mon ceph-mgr ceph-mds
# Wipe the monitor, manager, and metadata server data directories
rm -rf /var/lib/ceph/mon/  /var/lib/ceph/mgr/  /var/lib/ceph/mds/
# Let Proxmox clean up its own Ceph state, then purge the daemon packages
pveceph purge
apt -y purge ceph-mon ceph-osd ceph-mgr ceph-mds
rm -f /etc/init.d/ceph
# Reinstall whatever Ceph packages remain so their default files come back
for i in $(apt search ceph | grep installed | awk -F/ '{print $1}'); do apt reinstall $i; done
# Re-run the package configuration for the remaining Ceph packages
dpkg-reconfigure ceph-base
dpkg-reconfigure ceph-mds
dpkg-reconfigure ceph-common
dpkg-reconfigure ceph-fuse
# Reinstall once more so the reconfigured packages end up in a clean state
for i in $(apt search ceph | grep installed | awk -F/ '{print $1}'); do apt reinstall $i; done
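
Before heading back to the web interface, it's worth a quick sanity check that the daemons and packages are really gone. These two commands are just my own verification step, not part of the forum post:

# No output here means no Ceph daemons are left running
ps aux | grep '[c]eph-'
# ceph-mon, ceph-osd, and ceph-mgr should no longer be listed; base
# packages like ceph-common will still show up since they get reinstalled
dpkg -l | grep ceph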

Once you do that, go back to the Proxmox web interface and click on the Ceph menu under each node; you should see the prompt to “Install Ceph?”. Don't install it yet, though. We first need to delete a file to reset the configuration.

Go back to the Shell on one of the nodes and issue the following commands to change to the correct folder and remove the existing configuration:

cd /etc/pve
rm ceph.conf

Once you do this on one node, the file disappears from all of the other nodes in the cluster as well. That's because /etc/pve is the Proxmox Cluster File System (pmxcfs), which is automatically synchronized across every node.
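
If you want to double-check that, open a Shell on a different node and look for the file. Seeing “No such file or directory” confirms the deletion replicated cluster-wide:

# Run this on another node in the cluster
ls -l /etc/pve/ceph.conf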

Now that the configuration has been reset, you should be able to go back to the Proxmox web interface and do a normal, clean Ceph installation. I plan on posting an article soon on that whole process, so stay tuned.
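
If you'd rather kick off the installation from the command line instead of the web interface, pveceph can handle that too. Treat this as a sketch: the repository choice and the network below are assumptions based on my setup, so adjust them for yours.

# Install the Ceph packages from the no-subscription repository
pveceph install --repository no-subscription
# Initialize Ceph, pointing it at your storage network (example subnet)
pveceph init --network 10.0.0.0/24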
