Noctua-FileSystems


File systems on Noctua

The cluster provides two types of file systems:

  1. A shared file system located on an external filer. This file system is available on all clusters and on all systems running in the University network.
  2. A cluster-local parallel file system (Lustre). It provides low-latency access, high bandwidths and parallel file access.

The quota and the used storage of your project for the different file systems are shown in the output of pc2status.

Environment Variable | Source | Purpose | Initial Quota | Backup
HOME | Filer | Permanent small data. Per user account. | 5 GB | Yes
PC2DATA | Filer | Permanent project data (e.g. program binaries, final results). Per project. Mounted read-only on the compute nodes and read-write on the frontends. | Requested at project application | Yes
PC2PFS | Parallel Lustre file system | Temporary working data. Per project. | Requested at project application | No
PC2SCRATCH | Filer | Temporary working data. Per project. | Requested at project application | No
PC2SW | Filer/Cluster | Pre-installed software. Read-only. | None | Yes

Depending on your project, some of these file systems might not be available.
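
These locations are best addressed via their environment variables. As a minimal sketch of a typical workflow (the project and file names are placeholders, and the final copy has to run on a frontend because PC2DATA is mounted read-only on the compute nodes):

cd $PC2PFS/my-project               # work on the parallel file system
./my-simulation > output.dat        # temporary working data, not backed up
# later, on a frontend:
cp output.dat $PC2DATA/my-project/  # keep final results on the backed-up filer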

File systems on compute nodes of Noctua

There are NO node-local disks (except on the FPGA nodes). Compute jobs, and in particular I/O-intensive jobs, should use the Lustre file system, i.e., PC2PFS.

Please also note that the temporary directory /tmp on the nodes is a ramdisk: files written to this directory reside in the main memory of the node and reduce the main memory available to your programs. It is therefore not recommended to use /tmp.
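
If a program needs a temporary directory, point it to the parallel file system instead of /tmp. A minimal sketch for a job script, assuming a Slurm batch environment and an example project directory:

# use a job-specific directory on Lustre instead of the /tmp ramdisk
export TMPDIR=$PC2PFS/my-project/tmp.$SLURM_JOB_ID   # project path is an example
mkdir -p "$TMPDIR"
# ... run your program here ...
rm -rf "$TMPDIR"    # clean up when the job is done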

Export of Lustre File System

Please note: as of October 2nd, 2019, the exports of the parallel file systems of Noctua are not functional. We are working on a solution. For updates consult Noctua/Problems/CIFS_and_NFS_Exports.

The Lustre file system can be mounted on your personal computer in two different ways, via the CIFS or NFS4 protocol. To do this, your computer must be connected to the university network. The address of the gateway is either

  • lus-gw-1.cr2018.pc2.uni-paderborn.de or
  • lus-gw-2.cr2018.pc2.uni-paderborn.de

For both protocols you have to use your IMT credentials to establish the mount.



CIFS

1.) Windows
To access the LustreFS from Windows, open the File Explorer and enter \\lus-gw-1.cr2018.pc2.uni-paderborn.de\scratch\ in the address bar.
Username: Your-IMT-username@AD.UNI-PADERBORN.DE
Password: IMT password

2.) macOS
You can access the LustreFS on macOS from the Finder via the menu Go > Connect to Server (shortcut Cmd+K).
Enter smb://lus-gw-1.cr2018.pc2.uni-paderborn.de/scratch/ as the server address in the window that appears.
Username: Your-IMT-username@AD.UNI-PADERBORN.DE
Password: IMT password

3.) Linux
An sftp-like access to the LustreFS is possible by installing smbclient on your computer and issuing the following command:

smbclient //lus-gw-1.cr2018.pc2.uni-paderborn.de/scratch -U Your-IMT-username -W AD.UNI-PADERBORN.DE

When asked for a password, use your IMT password.
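
Inside the smbclient session you can browse and transfer files much like with an FTP client; the directory and file names below are only examples:

smb: \> ls                  # list the contents of /scratch
smb: \> cd my-project       # change into a project directory
smb: \> get results.dat     # download a file to the local working directory
smb: \> put input.dat       # upload a local file
smb: \> exit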



In order to mount the Lustre file system on a Linux computer you need the CIFS utilities. They can be installed from the packages of your Linux distribution, e.g. on Ubuntu or Debian with "apt-get install cifs-utils". You can then mount /scratch ($PC2PFS) to your local directory MOUNTPOINT as root:

mount -t cifs //lus-gw-1.cr2018.pc2.uni-paderborn.de/scratch MOUNTPOINT -o username=Your-IMT-username,domain=AD.UNI-PADERBORN.DE

You can also add the following line to /etc/fstab to make the mount permanent:

//lus-gw-1.cr2018.pc2.uni-paderborn.de/scratch MOUNTPOINT cifs username=Your-IMT-username,domain=AD.UNI-PADERBORN.DE 0 0

You will be asked for your password at boot.
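
MOUNTPOINT is a placeholder for a directory on your computer. As a sketch, assuming /mnt/noctua-scratch is used as MOUNTPOINT in the commands above:

sudo mkdir -p /mnt/noctua-scratch    # create the mount point once
sudo mount /mnt/noctua-scratch       # mount using the /etc/fstab entry
sudo umount /mnt/noctua-scratch      # unmount when done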


NFS4

To mount the LustreFS via the NFS4 protocol you need a Kerberos ticket. Obtain one and mount Lustre to MOUNTPOINT:

kinit Your-IMT-username@UNI-PADERBORN.DE
mount -t nfs -o vers=4,sec=krb5 lus-gw-1.cr2018.pc2.uni-paderborn.de:/ MOUNTPOINT
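
You can check that the ticket was obtained and clean up after unmounting (MOUNTPOINT as above):

klist                 # show the active Kerberos ticket
umount MOUNTPOINT     # unmount when you are done
kdestroy              # discard the ticket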

Copy files with rsync

Besides mounting the Lustre file system on your PC via CIFS or NFS, you can transfer files to and from it via rsync and an SSH ProxyJump:

rsync -azv -e 'ssh -J <your-username>@fe.noctua.pc2.uni-paderborn.de' <your-files> <your-username>@fe-1.cr2018.pc2.uni-paderborn.de:/scratch/<path>

As an alternative to rsync, you can use scp:

scp -o 'ProxyJump <your-username>@fe.noctua.pc2.uni-paderborn.de' <your-files> <your-username>@fe-1.cr2018.pc2.uni-paderborn.de:/scratch/<path>

You can use both of our frontends (fe-1.cr2018.pc2.uni-paderborn.de and fe-2.cr2018.pc2.uni-paderborn.de) as the target. This method works from outside the university network as well.
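
The same mechanism works in the opposite direction, e.g. to fetch results from /scratch back to your local machine (the path below is an example):

rsync -azv -e 'ssh -J <your-username>@fe.noctua.pc2.uni-paderborn.de' <your-username>@fe-1.cr2018.pc2.uni-paderborn.de:/scratch/<path>/ <local-directory>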