<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Storage on C&amp;CZ News</title>
    <link>https://cncz.science.ru.nl/tags/storage/</link>
    <description>Recent content in Storage on C&amp;CZ News</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <managingEditor>postmaster</managingEditor>
    <lastBuildDate>Thu, 13 Mar 2025 12:36:19 +0000</lastBuildDate><atom:link href="https://cncz.science.ru.nl/tags/storage/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Data storage</title>
      <link>https://cncz.science.ru.nl/howto/storage/</link>
      <pubDate>Thu, 13 Mar 2025 12:36:19 +0000</pubDate>
      <author>Bram Daams</author>
      <guid>https://cncz.science.ru.nl/howto/storage/</guid>
      <description>&lt;h1 id=&#34;introduction&#34;&gt;Introduction&lt;/h1&gt;
&lt;p&gt;In our Faculty, different types of data storage are offered:&lt;/p&gt;
&lt;table&gt;
  &lt;thead&gt;
      &lt;tr&gt;
          &lt;th&gt;Storage class&lt;/th&gt;
          &lt;th&gt;Description&lt;/th&gt;
          &lt;th&gt;Risk for data loss&lt;/th&gt;
      &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;a href=&#34;#home-directories&#34;&gt;Home directories&lt;/a&gt;&lt;/td&gt;
          &lt;td&gt;Small (several GB), reliable, backed up storage for individuals&lt;/td&gt;
          &lt;td&gt;low&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;a href=&#34;#network-shares&#34;&gt;Network shares&lt;/a&gt;&lt;/td&gt;
          &lt;td&gt;Larger (1-200 TB), reliable, backed up storage for individuals or groups&lt;/td&gt;
          &lt;td&gt;low&lt;/td&gt;
      &lt;/tr&gt;
      &lt;tr&gt;
          &lt;td&gt;&lt;a href=&#34;#local-storage&#34;&gt;Local storage&lt;/a&gt;&lt;/td&gt;
          &lt;td&gt;Not backed up storage on desktop computers and cluster nodes, likely to be lost in case of a hardware problem or when the machine gets reinstalled&lt;/td&gt;
          &lt;td&gt;high&lt;/td&gt;
      &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
&lt;h1 id=&#34;home-directories&#34;&gt;Home directories&lt;/h1&gt;
&lt;p&gt;Your Science login comes with a home directory of 5GB at no costs. This is a safe place to store you work related documents. Home directories are stored
on reliable hardware and &lt;a href=&#34;../backup&#34;&gt;backupped&lt;/a&gt; automatically. If you need more than 5GB of storage, and you can&amp;rsquo;t &lt;a href=&#34;../quota/#frequent-culprits-for-eating-your-quota&#34;&gt;clean up some data&lt;/a&gt;, it can be enlarged on &lt;a href=&#34;../contact&#34;&gt;request&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Access network shares via other network protocols</title>
      <link>https://cncz.science.ru.nl/news/2024-06-26-rclone/</link>
      <pubDate>Wed, 26 Jun 2024 15:33:06 +0200</pubDate>
      <author>Miek Gieben</author>
      <guid>https://cncz.science.ru.nl/news/2024-06-26-rclone/</guid>
      <description>&lt;p&gt;If you use &lt;a href=&#34;https://cncz.science.ru.nl/howto/storage/#network-shares&#34;&gt;C&amp;amp;CZ storage&lt;/a&gt;, you now have the new option
to access these via network protocols other than SMB (for Windows and macOS) and NFS (for Linux), namely:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;sftp&lt;/code&gt;: a secure variant of FTP&lt;/li&gt;
&lt;li&gt;&lt;code&gt;https&lt;/code&gt;: a (read-only) view of the share for browsers&lt;/li&gt;
&lt;li&gt;&lt;code&gt;webdavs&lt;/code&gt;: a (read-write) view of the share for browsers and webdav clients&lt;/li&gt;
&lt;li&gt;&lt;code&gt;s3&lt;/code&gt;: a read/write view using the Simple Storage Service (S3)&lt;/li&gt;
&lt;/ul&gt;
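&lt;p&gt;As an illustration, such a share could be reached with rclone. The config sketch below uses placeholder host names, user names, and credentials; the real values are provided when a protocol is enabled for your share:&lt;/p&gt;

```ini
# Sketch of an rclone config (~/.config/rclone/rclone.conf).
# All host names, user names, and keys below are placeholders.
[share-sftp]
type = sftp
host = sftp.example.science.ru.nl
user = shareuser
key_file = ~/.ssh/id_ed25519

[share-s3]
type = s3
provider = Other
endpoint = https://s3.example.science.ru.nl
access_key_id = PLACEHOLDER_KEY
secret_access_key = PLACEHOLDER_SECRET
```

&lt;p&gt;With such a config, &lt;code&gt;rclone ls share-sftp:&lt;/code&gt; would list the share&amp;rsquo;s contents.&lt;/p&gt;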
&lt;p&gt;Each of these may be active at the same time, and optionally can have an end date, so you can &amp;ldquo;open
up&amp;rdquo; a share for writes from another organisation which then automatically &amp;ldquo;closes&amp;rdquo; at the end date.
For &lt;code&gt;sftp&lt;/code&gt;, authentication is done via an SSH key.
The other protocols are authenticated with a username and password. These credentials are not tied to any person and can easily be revoked or regenerated if the need arises.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>ZFS Volumes</title>
      <link>https://cncz.science.ru.nl/howto/zfs-storage/</link>
      <pubDate>Fri, 12 Jan 2024 15:12:45 +0000</pubDate>
      <author>Miek Gieben</author>
      <guid>https://cncz.science.ru.nl/howto/zfs-storage/</guid>
      <description>&lt;p&gt;A new storage option has been developed by C&amp;amp;CZ. This page shows the technical details and
properties of this storage, which will hopefully help you decide whether it is something you
want. Any storage we hand out is called a &amp;ldquo;volume&amp;rdquo;. For these ZFS volumes:&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;Size&lt;/dt&gt;
&lt;dd&gt;A volume can be as large as you want, but no larger than the amount we can put in a single machine.
Currently (2024+) we can buy machines with roughly 140-200 TB of storage.&lt;/dd&gt;
&lt;dt&gt;Snapshots&lt;/dt&gt;
&lt;dd&gt;On these volumes we enable &lt;a href=&#34;../zfs-share-snapshot/&#34;&gt;snapshots&lt;/a&gt;, so that you can retrieve (deleted) files yourself.&lt;/dd&gt;
&lt;dt&gt;Backup&lt;/dt&gt;
&lt;dd&gt;Each volume is backed up to local S3 storage. This should be thought of as
disaster recovery; restoring from this backup should (hopefully) rarely be needed.&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;&lt;a href=&#34;https://dhz.science.ru.nl&#34; target=&#34;_blank&#34;&gt;All of these volumes are visible in DIY in the &amp;ldquo;Storage&amp;rdquo; section&lt;/a&gt; if you have access to them.
&lt;a href=&#34;https://cncz.science.ru.nl/nl/howto/storage/#bestellen&#34; target=&#34;_blank&#34;&gt;How to order such storage is described here.&lt;/a&gt;&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Clean up disk space</title>
      <link>https://cncz.science.ru.nl/howto/cleanup-disk-space/</link>
      <pubDate>Fri, 24 Nov 2023 12:17:00 +0000</pubDate>
      <author>Bram Daams</author>
      <guid>https://cncz.science.ru.nl/howto/cleanup-disk-space/</guid>
      <description>&lt;p&gt;You might have received an email that your personal or group network disk is almost full.
This article describes some tools to identify large files and directories and to clean up disk
space.&lt;/p&gt;
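&lt;p&gt;If you prefer the command line (for example over ssh on a Linux machine), a quick first check could look like this; a sketch using standard tools:&lt;/p&gt;

```shell
# List the ten largest entries directly under the current directory,
# sizes in kilobytes, largest first.
du -k -d 1 . | sort -rn | head -n 10
```

&lt;p&gt;Descend into the biggest directory it reports and repeat until you find the culprit.&lt;/p&gt;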
&lt;h2 id=&#34;finding-large-files--directories-with-windirstat-graphical-windows&#34;&gt;Finding large files / directories with WinDirStat (graphical, Windows)&lt;/h2&gt;
&lt;p&gt;&lt;img loading=&#34;lazy&#34; src=&#34;https://cncz.science.ru.nl/img/2023/windirstat.png&#34; alt=&#34;windirstat screenshot&#34;  /&gt;
&lt;/p&gt;
&lt;p&gt;&lt;a href=&#34;https://windirstat.net&#34; target=&#34;_blank&#34;&gt;WinDirStat&lt;/a&gt; is a disk cleanup tool for Windows that provides a
visual representation of disk space usage. It helps you to quickly identify and
understand which files and folders are consuming the most space. The program offers
detailed information about each file, including type, last modified date, and file
path. In addition to visualization, WinDirStat includes built-in cleanup options,
enabling you to directly delete or move files within the program.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Disk quota</title>
      <link>https://cncz.science.ru.nl/howto/quota/</link>
      <pubDate>Wed, 13 Sep 2023 17:07:00 +0000</pubDate>
      <author>Ben Polman</author>
      <guid>https://cncz.science.ru.nl/howto/quota/</guid>
      <description>&lt;p&gt;Your personal quota only determines the amount of disk space your own
data may occupy and the number of files and folders you may have on your
Windows &lt;code&gt;U:\&lt;/code&gt; disk or in your Linux home directory. Quotas may be active on other
disks as well, but not all disks have quotas. The mail servers also use
quotas to limit the disk space of each user’s mailbox.&lt;/p&gt;
&lt;h2 id=&#34;windows&#34;&gt;Windows&lt;/h2&gt;
&lt;p&gt;On Windows desktops where the home directory is available as the U: or H:
disk, right-click the disk and choose &lt;code&gt;Properties&lt;/code&gt; to view the
amounts of used and available personal disk space in kilobytes.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Snapshots with ZFS</title>
      <link>https://cncz.science.ru.nl/howto/zfs-share-snapshot/</link>
      <pubDate>Wed, 05 Jul 2023 15:12:45 +0000</pubDate>
      <author>Miek Gieben</author>
      <guid>https://cncz.science.ru.nl/howto/zfs-share-snapshot/</guid>
      <description>&lt;p&gt;Network shares with &lt;a href=&#34;https://openzfs.org/wiki/Main_Page&#34; target=&#34;_blank&#34;&gt;ZFS&lt;/a&gt; as file system have the ability (which
C&amp;amp;CZ switches on by default) to make &lt;em&gt;snapshots&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;A &lt;em&gt;snapshot&lt;/em&gt; is a read-only copy of the share, made at a specific point in time. Such
snapshots allow you to find previous versions of your files. On most shares, snapshots go back
45 days. After that period they are removed.&lt;/p&gt;
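&lt;p&gt;With OpenZFS, snapshots are typically reachable through the hidden &lt;code&gt;.zfs/snapshot&lt;/code&gt; directory inside the share (the OpenZFS default when the snapshot directory is enabled). The helper below only composes such a path; the share and snapshot names are placeholders:&lt;/p&gt;

```shell
# Sketch: build the path of a file inside a ZFS snapshot.
# Assumed layout (OpenZFS default with a visible snapshot directory):
#   SHARE/.zfs/snapshot/SNAPSHOT/relative/path
snapshot_path() {
  share=$1
  snap=$2
  rel=$3
  printf '%s/.zfs/snapshot/%s/%s\n' "$share" "$snap" "$rel"
}

# Example: copy a deleted file back from a daily snapshot
# (all names below are placeholders):
# cp "$(snapshot_path /vol/myshare daily-2023-07-04 docs/report.txt)" docs/report.txt
```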
</description>
    </item>
    
    <item>
      <title>Mounting a network share</title>
      <link>https://cncz.science.ru.nl/howto/mount-a-network-share/</link>
      <pubDate>Mon, 16 Jan 2023 15:12:45 +0000</pubDate>
      <author>Bram Daams</author>
      <guid>https://cncz.science.ru.nl/howto/mount-a-network-share/</guid>
      <description>&lt;h1 id=&#34;what-path-to-mount&#34;&gt;What path to mount?&lt;/h1&gt;
&lt;p&gt;The &lt;a href=&#34;../storage&#34;&gt;storage page&lt;/a&gt; describes the paths for your &lt;a href=&#34;../storage#home-directory-paths&#34;&gt;home directory&lt;/a&gt; and &lt;a href=&#34;../storage#network-share-paths&#34;&gt;network shares&lt;/a&gt;. Once you know what network share to mount, follow the steps for your operating system below.&lt;/p&gt;
&lt;div class=&#34;notice info&#34; &gt;
&lt;p class=&#34;first notice-title&#34;&gt;&lt;span class=&#34;icon-notice baseline&#34;&gt;&lt;svg&gt;&lt;use href=&#34;#info-notice&#34;&gt;&lt;/use&gt;&lt;/svg&gt;&lt;/span&gt;Info&lt;/p&gt;&lt;p&gt;A &lt;a href=&#34;../vpn&#34;&gt;VPN&lt;/a&gt; connection is required when accessing these paths from outside of the RU network.&lt;/p&gt;&lt;/div&gt;
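&lt;p&gt;As an illustration, on Linux an SMB share could be mounted at boot with an &lt;code&gt;/etc/fstab&lt;/code&gt; entry along these lines; the server name, share name, mount point, and credentials file below are placeholders, not actual values:&lt;/p&gt;

```
//fileserver.example.ru.nl/myshare  /mnt/myshare  cifs  credentials=/home/me/.smbcredentials,uid=me  0  0
```

&lt;p&gt;Keeping the username and password in a credentials file readable only by you avoids putting them in the world-readable fstab itself.&lt;/p&gt;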

&lt;blockquote&gt;
&lt;p&gt;If you have trouble connecting to a network share after following the steps below, do not hesitate to &lt;a href=&#34;../contact&#34;&gt;contact us&lt;/a&gt;. We&amp;rsquo;ll be happy to assist you!&lt;/p&gt;&lt;/blockquote&gt;</description>
    </item>
    
    <item>
      <title>Ceph storage interruption</title>
      <link>https://cncz.science.ru.nl/cpk/1302/</link>
      <pubDate>Mon, 24 Oct 2022 00:00:00 +0200</pubDate>
      <author>Simon Oosthoek</author>
      <guid>https://cncz.science.ru.nl/cpk/1302/</guid>
      <description>&lt;p&gt;Just before 23:09, quite a lot of Ceph storage nodes became unreachable. This seems to have been caused by one of the redundant links between two datacenter locations failing for about 4 seconds. That failure triggered a whole slew of Ceph OSD processes being killed off and not starting again. A generic configuration change made for all our servers had generated an extra network interface, which confused some of the OSD processes (depending on interface ordering) when they started up. We are reasonably confident that we can prevent this from happening again in the future.&lt;/p&gt;</description>
    </item>
    
    <item>
      <title>Ceph data storage expanded</title>
      <link>https://cncz.science.ru.nl/news/2021-11-23_ceph-data-storage-expanded/</link>
      <pubDate>Tue, 23 Nov 2021 14:06:00 +0100</pubDate>
      <author>Wim Janssen</author>
      <guid>https://cncz.science.ru.nl/news/2021-11-23_ceph-data-storage-expanded/</guid>
      <description>&lt;p&gt;Since the end of 2019, C&amp;amp;CZ offers
&lt;a href=&#34;https://wiki.cncz.science.ru.nl/index.php?title=Diskruimte&amp;amp;setlang=en#Ceph_Storage&#34; target=&#34;_blank&#34;&gt;Ceph&lt;/a&gt;
as a choice for data storage, next to the traditional RAID storage on
individual fileservers. Because demand for Ceph storage is rising, the
Ceph storage has been upgraded from 1.8 PiB to 2.8 PiB gross. With this
upgrade, storage servers have been placed in a central RU datacenter, so
the Ceph storage is now distributed over three datacenters. If you want
to find out whether Ceph suits you, please &lt;a href=&#34;https://cncz.science.ru.nl/howto/contact/&#34;&gt;contact C&amp;amp;CZ&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    
  </channel>
</rss>
