CephFS retired   news

After the CephFS storage outage in our Ceph cluster, we decided, together with the users of this service, to stop offering CephFS shares over SMB and NFS. All data has been copied out of CephFS and the CephFS service has been ended. From now on we will use the Ceph cluster only to store our backups, via the S3 API.
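
For illustration, a minimal sketch of what storing a backup through the Ceph S3 API (RADOS Gateway) can look like, using Python and boto3; the endpoint, bucket name, and credentials are hypothetical placeholders, not our actual configuration:

```python
# Minimal sketch: upload a backup archive to a Ceph cluster via its
# S3-compatible RADOS Gateway. Endpoint, keys, and names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://rgw.example.org",  # hypothetical RGW endpoint
    aws_access_key_id="ACCESS_KEY",          # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# Create the bucket if it does not exist yet, then upload the archive.
s3.create_bucket(Bucket="backups")
s3.upload_file("nightly-backup.tar.gz", "backups", "nightly-backup.tar.gz")
```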

Ceph storage interruption   cpk

Just before 23:09, a large number of Ceph storage nodes became unreachable. The cause appears to have been one of the redundant links between two datacenter locations failing for about 4 seconds. This triggered a whole slew of Ceph OSD processes being killed and failing to restart. A generic configuration change rolled out to all our servers had created an extra network interface, which confused some of the OSD processes (depending on interface ordering) when they started up...
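
As a general mitigation for this class of problem, Ceph's networks can be pinned explicitly in ceph.conf so that daemons bind to the intended subnets rather than to whichever interface happens to enumerate first. A sketch, with hypothetical subnets rather than our actual configuration:

```ini
# Sketch of pinning Ceph's networks in ceph.conf; subnets are examples.
[global]
public_network = 10.0.0.0/24    # client-facing traffic
cluster_network = 10.0.1.0/24   # OSD replication traffic
```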

Ceph problem   cpk

During a routine upgrade of Ceph, a bug in the latest version manifested itself and made the Ceph Manager unreachable. After aborting the upgrade, and with help from the ceph-users mailing list, everything became available again using a workaround.
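
For illustration, the kind of recovery steps involved might look as follows on a cephadm-managed cluster; these are generic commands, not the exact workaround used in this incident:

```sh
ceph orch upgrade stop   # abort the running upgrade
ceph mgr fail            # fail over to a standby manager daemon
ceph -s                  # verify cluster health and the active mgr
```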


New data storage: Ceph   news

Recently C&CZ introduced an extra choice for data storage, next to the traditional RAID storage on individual fileservers: Ceph. The main advantages of Ceph are that it scales almost without limits and has no single point of failure, being “self-healing” and “self-managing” instead. Within Ceph storage further choices can be made; the more expensive options survive the loss of a complete datacenter. The Ceph storage can also be used as S3 storage...