<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Ceph on C&amp;CZ News</title>
    <link>https://cncz.science.ru.nl/tags/ceph/</link>
    <description>Recent content in Ceph on C&amp;CZ News</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <managingEditor>postmaster</managingEditor>
    <lastBuildDate>Mon, 15 Apr 2024 10:05:14 +0200</lastBuildDate><atom:link href="https://cncz.science.ru.nl/tags/ceph/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>CephFS retired</title>
      <link>https://cncz.science.ru.nl/news/2024-04-15_cephfs-retired/</link>
      <pubDate>Mon, 15 Apr 2024 10:05:14 +0200</pubDate>
      <author>Miek Gieben</author>
      <guid>https://cncz.science.ru.nl/news/2024-04-15_cephfs-retired/</guid>
      <description>&lt;p&gt;After the &lt;a href=&#34;https://cncz.science.ru.nl/cpk/1359/&#34; target=&#34;_blank&#34;&gt;storage downtime&lt;/a&gt; of CephFS in our Ceph cluster, we decided,
together with the users of this service, to stop offering CephFS shares through SMB and NFS.&lt;/p&gt;
&lt;p&gt;All data has been copied out of CephFS.&lt;/p&gt;
&lt;p&gt;We have ended the CephFS service. We will only use the Ceph cluster to store our backups, using the S3 API.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Ceph storage interruption</title>
      <link>https://cncz.science.ru.nl/cpk/1302/</link>
      <pubDate>Mon, 24 Oct 2022 00:00:00 +0200</pubDate>
      <author>Simon Oosthoek</author>
      <guid>https://cncz.science.ru.nl/cpk/1302/</guid>
      <description>&lt;p&gt;Just before 23:09, a large number of Ceph storage nodes became unreachable. This appears to have been caused by one of the redundant links between two datacenter locations failing for about four seconds. The failure triggered a whole slew of Ceph OSD processes being killed off and not starting again. A generic configuration change made for all our servers had created an extra network interface, which confused some of the OSD processes (depending on interface ordering) when they started up. We are reasonably confident we can prevent this from happening in the future.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Ceph problem</title>
      <link>https://cncz.science.ru.nl/cpk/1282/</link>
      <pubDate>Wed, 24 Mar 2021 19:00:00 +0100</pubDate>
      <author></author>
      <guid>https://cncz.science.ru.nl/cpk/1282/</guid>
      <description>&lt;p&gt;During a routine upgrade of Ceph, a bug in the latest version manifested
itself and made the Ceph manager unreachable. After aborting the upgrade,
and with help from the ceph-users mailing list, everything became
available again via a workaround.&lt;/p&gt;</description>
    </item>
    
  </channel>
</rss>
