
Version 2.1 by sendels on 2019/05/15 09:18

Brief User Guide to Computing Resources at FLASH


  • Brief User Guide to Computing Resources at FLASH
    • CAMP at BL1: hasfumidaq
    • Maxwell HPC Cluster
    • Gamma Portal
    • Flash Control System



CAMP at BL1: hasfumidaq

TL;DR: Use the Maxwell cluster instead.

The CAMP collaboration used to operate a server for data acquisition and online analysis named hasfumidaq (HASylab Flash Ultra fast Molecular Imaging DAQ). Currently the server runs as a backup only; it is going to be decommissioned, most probably in early 2020.


Hardware and Operating System

The machine hasfumidaq is a 12-core Dell Precision T5500 with 12 GByte of RAM and more than 80 TByte of disk space, running Ubuntu 16.04 GNU/Linux.

The local disk space of hasfumidaq is not under any backup regime, i.e. what is deleted or overwritten here is gone for good. The storage layout is as follows:


Path                        Location     Purpose
~/                          AFS network  programs and precious data needing backup
/var/acqu/                  local disk   short-term data acquisition files
/var/data/                  local disk   mid-term data storage and reproducible analysis
/pnfs/desy.de/flash1/disk/  NFS network  longer-term data storage for offline analysis

For data acquisition, /var/acqu/ is exported via NFS to hasfcamppnccd1 and hasfcamppnccd2. There is also a CIFS (Samba) export of /var/acqu/ for the user bl1acqus; the latter is intended for MS Windows data acquisition nodes only.

Note the caveat with the PNFS ("Perfectly Normal File System") disk: files on this file system cannot be modified once they are created.
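
As an illustration, a minimal Python sketch of the resulting workflow; the file names are hypothetical, only the directory roles follow the table above:

    import shutil
    from pathlib import Path

    # Hypothetical file names, for illustration only.
    local = Path("/var/data/analysis/run0042_results.h5")
    archive = Path("/pnfs/desy.de/flash1/disk/run0042_results.h5")

    # Do all writing and rewriting on the local disk first ...
    local.parent.mkdir(parents=True, exist_ok=True)
    local.write_bytes(b"...")  # stand-in for the real analysis output

    # ... then copy the finished file to PNFS in one go. Reopening the
    # PNFS copy for writing or appending later would fail, because files
    # there are immutable once created.
    shutil.copy2(local, archive)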


User Login

Hasfumidaq provides logins to AFS account owners who are members of the groups hasylab or cfel. Accordingly, you have access to your DESY AFS home directory on hasfumidaq.

If you use hasfumidaq, make sure you do not interfere with data acquisition at BL1, i.e. do not occupy many processor cores at full load and do not incur heavy network traffic. Note that a single Matlab GUI already creates a lot of network traffic. For offline data analysis, use the Petra III workgroup cluster instead. Please use your personal account when working on hasfumidaq: the functional account "bl1user" should be restricted to data acquisition, and the account "vuvfuser" to online data analysis for guests who do not have a DESY AFS account.
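
As a rough illustration of that etiquette, a short Python sketch; the thread counts and load threshold are arbitrary assumptions, not a site policy:

    import os

    # Cap the common numerics libraries at a few threads *before*
    # importing them, so one analysis job cannot grab all twelve cores.
    os.environ["OMP_NUM_THREADS"] = "2"
    os.environ["MKL_NUM_THREADS"] = "2"

    # Check the 1-minute load average and back off if the machine is
    # already busy, e.g. with data acquisition for BL1.
    load1, _, _ = os.getloadavg()
    if load1 > 6:  # arbitrary example threshold
        raise SystemExit("hasfumidaq is busy; use the Maxwell cluster instead.")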


Software and Administration

For online or quasi-online data analysis, the usual set of scientific software is installed on hasfumidaq: Python with the common number crunching libraries, Java, a current version of HDFView, the h5tools, Matlab R2013b, and the DOOCS tools and libraries. If you need more, talk to the admins.
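
For a quick look at a data file, something like the following works with h5py, assuming it is among the installed number crunching libraries (the file name is hypothetical):

    import h5py

    # Open a hypothetical acquisition file read-only and list its contents.
    with h5py.File("/var/acqu/run0042.h5", "r") as f:
        f.visit(print)  # print the name of every group and dataset
        # data = f["path/to/dataset"][...]  # would read a dataset into memory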

Administrators of hasfumidaq are the staff at FS-EC (Blume, Fleck, Rothkirch), the staff at IT (Flemming et al.), the CAMP staff (Passow, Erk) and the IT people at FS-FL (Düsterer, Grunewald, Sendel).


Maxwell HPC Cluster

The Maxwell HPC cluster consists of a set of 64-core servers with 512 GByte of RAM each, running CentOS Linux 7 (Core). You log in to max-fsc and are connected to the server with the lowest load. Please see the docs on Confluence for details.

Your AFS home directory should be available as a link at ~/afs/. You have an ASAP3 home directory common to all Maxwell nodes.

On all max-fsc nodes you have access to /pnfs/desy.de/flash1/disk/ just as on hasfumidaq. For intermediate storage of large files, use the /scratch space local to each node.
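
A minimal sketch of that pattern, with hypothetical file names; intermediates stay on the node-local scratch, only the final result goes to shared storage:

    import shutil
    import tempfile
    from pathlib import Path

    # /scratch is local to the node: fast, but invisible to other nodes
    # and not backed up, so treat it as disposable.
    with tempfile.TemporaryDirectory(dir="/scratch") as tmp:
        intermediate = Path(tmp) / "big_intermediate.npy"
        intermediate.write_bytes(b"...")  # stand-in for the real processing

        # Keep only the final result in a location shared across nodes,
        # e.g. your ASAP3 home directory.
        shutil.copy2(intermediate, Path.home() / "final_result.npy")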

The ASAP3 Core File System is also mounted on all Maxwell nodes. It is accessible at /asap3/flash/gpfs/<beamline>/<year>/data/<beamtime ID>/. Please refer to the documentation on Confluence.
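
A small Python illustration of assembling a beamtime data path from those placeholders (the beamline, year and beamtime ID are made-up example values):

    from pathlib import Path

    # Made-up example values; use your own beamline, year and
    # beamtime application number.
    beamline, year, beamtime_id = "bl1", "2019", "11001234"

    data_dir = Path("/asap3/flash/gpfs") / beamline / year / "data" / beamtime_id
    print(data_dir)           # /asap3/flash/gpfs/bl1/2019/data/11001234
    print(data_dir.is_dir())  # True on a Maxwell node if the beamtime exists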

The cluster is also the location your files will be staged to if you access them through the Gamma Portal.


Gamma Portal

The Gamma Portal is a web front end to DESY's dCache tape archive.

Your data are migrated to the tape archive upon request to the responsible beamline scientist. The archive is organised by beamtime application number and provides elaborate access control.

You access your data using your DOOR account. The web front end allows you to download your data. If you also have a DESY AFS account, it additionally permits you to stage the data, i.e. to create a copy on the PNFS disk for analysis on the Petra III workgroup cluster.


Flash Control System

In addition to the services outlined above, DESY's group MCS-4 provides data acquisition services related to accelerator operations and beam diagnostics. For details, see Stefan's "Information about the FLASH DAQ system" pages.

For online DAQ access and analysis, log in to flashlxuser1 or flashlxuser2 using one of the functional accounts.