= Brief User Guide to Computing Resources at FLASH =

\\

* Brief User Guide to Computing Resources at FLASH
** CAMP at BL1: hasfumidaq
** Maxwell HPC Cluster
** Gamma Portal
** Flash Control System

\\

\\

== CAMP at BL1: hasfumidaq ==

**TL;DR:** Use the [[Maxwell cluster~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/IS/Maxwell||shape="rect"]] instead.

The [[CAMP~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:http://photon-science.desy.de/facilities/flash/beamlines/bl_beamlines_flash1/camp/index_eng.html||shape="rect"]] cooperation used to operate a server for data acquisition and online analysis named **hasfumidaq** (HASylab Flash Ultra fast Molecular Imaging DAQ). Currently the server runs as a backup only; it is going to be decommissioned, most probably in early 2020.

\\

=== Hardware and Operating System ===

The machine {{code language="none"}}hasfumidaq{{/code}} is a 12-core Dell Precision T5500 with 12 GByte RAM and more than 80 TByte of disk space, running a GNU/Linux operating system (Ubuntu 16.04).

The local disk space of {{code language="none"}}hasfumidaq{{/code}} is **not** under any backup regime, i.e. what is deleted or overwritten here is gone for good. The storage layout is as follows:

\\

(% class="wrapped" %)
|=mount point|=type|=purpose
|{{code language="none"}}~/{{/code}}|AFS network|programs and precious data needing backup
|{{code language="none"}}/var/acqu/{{/code}}|local disk|short-term data acquisition files
|{{code language="none"}}/var/data/{{/code}}|local disk|mid-term data storage and reproducible analysis
|{{code language="none"}}/pnfs/desy.de/flash1/disk/{{/code}}|NFS network|longer-term data storage for offline analysis

For data acquisition {{code language="none"}}/var/acqu/{{/code}} is exported via NFS to {{code language="none"}}hasfcamppnccd1{{/code}} and {{code language="none"}}hasfcamppnccd2{{/code}}. There is also a CIFS (Samba) export of {{code language="none"}}/var/acqu/{{/code}} for the user bl1acqus; the latter is intended for MS Windows data acquisition nodes only.

Note that there is a caveat with the PNFS (Perfectly Normal File System) disk: files on this file system cannot be modified once they are created.
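
A minimal sketch of what this write-once behaviour means in practice (the file names are hypothetical, and the exact error raised may vary):

{{code language="python"}}
import shutil
from pathlib import Path

# Hypothetical file names, for illustration only.
pnfs_file = Path("/pnfs/desy.de/flash1/disk/example_run.h5")
work_copy = Path("/var/data/example_run.h5")

# Files on the PNFS disk are write-once: reopening an existing
# file for writing or appending fails with an I/O error.
try:
    with open(pnfs_file, "a") as handle:
        handle.write("update")
except OSError as err:
    print(f"cannot modify a PNFS file in place: {err}")

# The usual workaround: copy the file to local disk and modify the copy.
shutil.copy2(pnfs_file, work_copy)
{{/code}}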

\\

=== User Login ===

The machine {{code language="none"}}hasfumidaq{{/code}} provides logins to AFS account owners who are members of the groups {{code language="none"}}hasylab{{/code}} or {{code language="none"}}cfel{{/code}}. Accordingly, you have access to your DESY AFS home directory on {{code language="none"}}hasfumidaq{{/code}}.

If you use {{code language="none"}}hasfumidaq{{/code}}, make sure you do not interfere with data acquisition at BL1, i.e. do not occupy many processor cores at full load and do not incur heavy network traffic. Note that a single Matlab GUI already creates a lot of network traffic. For offline data analysis use the Petra III workgroup cluster instead. Please use your **personal account** when working on {{code language="none"}}hasfumidaq{{/code}}. The functional account "bl1user" should be restricted to data acquisition, and the account "vuvfuser" to online data analysis for guests who do not have a DESY AFS account.
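
For Python-based analysis, one simple way to respect this is to cap the implicit BLAS/OpenMP thread pools before importing numpy. A minimal sketch, assuming a numpy-based script (the limit of two threads is an arbitrary choice):

{{code language="python"}}
import os

# Cap the implicit BLAS/OpenMP thread pools *before* importing numpy,
# so a quick analysis script does not grab all cores of the DAQ machine.
os.environ["OMP_NUM_THREADS"] = "2"
os.environ["OPENBLAS_NUM_THREADS"] = "2"
os.environ["MKL_NUM_THREADS"] = "2"

import numpy as np  # the import must come after setting the environment

# Linear algebra in numpy now uses at most two threads.
matrix = np.random.rand(2000, 2000)
print(np.linalg.norm(matrix))
{{/code}}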

\\

=== Software and Administration ===

For online or quasi-online data analysis the usual set of scientific **software** is installed on {{code language="none"}}hasfumidaq{{/code}}: Python with all the number-crunching libraries, Java, a current version of HDFView, the h5tools, Matlab R2013b, and the DOOCS tools and libraries. If you need more, talk to the admins.
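
For example, assuming h5py is among the installed number-crunching libraries (the file name below is a hypothetical placeholder), a quick look into an HDF5 file could be taken like this:

{{code language="python"}}
import h5py  # assumed to be installed; not confirmed by this guide

# Open a data file read-only and list every group and dataset in it.
with h5py.File("/var/data/example_run.h5", "r") as h5:
    h5.visit(print)
{{/code}}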

**Administrators** of {{code language="none"}}hasfumidaq{{/code}} are the staff at FS-EC (Blume, Fleck, Rothkirch, Sendel), the staff at IT (Flemming et al.), the CAMP staff (Passow, Erk) and the IT people at FS-FL (Düsterer, Grunewald).

\\

== Maxwell HPC Cluster ==

The Maxwell HPC cluster consists of a set of 64-core servers with 512 GByte of RAM each. They run GNU/Linux of the CentOS Linux 7 (Core) flavour. You log in to **max-fsc** and are connected to the server with the lowest load. Please see [[the docs on Confluence for details~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/IS/Maxwell||shape="rect"]].

Your AFS home directory should be available as a link at {{code language="none"}}~/afs/{{/code}}. You also have an ASAP3 home directory common to all Maxwell nodes.

On all {{code language="none"}}max-fsc{{/code}} nodes you have access to {{code language="none"}}/pnfs/desy.de/flash1/disk/{{/code}} just as on {{code language="none"}}hasfumidaq{{/code}}. For intermediate storage of large files use the {{code language="none"}}/scratch{{/code}} space local to each node.
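
A minimal sketch of staging a file to node-local scratch before heavy processing (the source file name and the per-user subdirectory are assumptions, not a prescribed layout):

{{code language="python"}}
import getpass
import shutil
from pathlib import Path

# Hypothetical source file; /scratch is local to the node you are logged in to.
source = Path("/pnfs/desy.de/flash1/disk/example_run.h5")
scratch_dir = Path("/scratch") / getpass.getuser()
scratch_dir.mkdir(parents=True, exist_ok=True)

# Work on the fast local copy instead of the network mount.
shutil.copy2(source, scratch_dir / source.name)
{{/code}}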

The ASAP3 core file system is also mounted on all Maxwell nodes. It is accessible at {{code language="none"}}/asap3/flash/gpfs/.../data/...{{/code}}. Please refer to the [[documentation on Confluence~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/ASAP3||shape="rect"]].

The cluster is also the location your files will be staged to if you access them through the Gamma Portal.

\\

== Gamma Portal ==

The [[Gamma Portal~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://gamma-portal.desy.de||shape="rect"]] is a web front end to DESY's dCache tape archive.

Upon request to the responsible beamline scientist, your data are migrated to the tape archive. The archive is organised along beamtime application numbers and provides elaborate access control.

You access your data using your DOOR account. The web front end allows you to download your data. If you have a DESY AFS account, it also permits you to stage the data, i.e. a copy of your data is created on the PNFS disk for analysis on the Petra III workgroup cluster.

\\

== Flash Control System ==

In addition to the services outlined above, DESY's group MCS-4 provides data acquisition services related to accelerator operations and beam diagnostics. For details, please refer to Stefan's [[doc:FLASHUSER.Data Acquisition and controls.Data Access at FLASH (DAQ, gpfs,\.\.\.).Offline data analysis (DAQ).WebHome]] pages.

For online DAQ access and analysis you log in to {{code language="none"}}flashlxuser1{{/code}} or {{code language="none"}}flashlxuser2{{/code}}, using one of the functional accounts.