===== Maxwell Resources =====

The [[Maxwell Hardware>>url:https://confluence.desy.de/display/IS/Maxwell+Hardware||shape="rect"]] is divided into different partitions which are upgraded constantly. The concept is that all integrated hardware is available via the {{code language="none"}}ALL{{/code}} partition ({{code language="none"}}ALLGPU{{/code}} for nodes with integrated GPUs), but a job running there is terminated when a job with higher priority (from the node's owner) comes in. Dedicated to Photon Science are the {{code language="none"}}PS{{/code}} (internal users) and {{code language="none"}}PSX{{/code}} (external users) partitions. Regarding GPUs (May 2019), the PS partition has nodes with a single NVIDIA Tesla P100 while the PSX partition has no GPUs at all.

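To see which partitions are visible to your account and whether they provide GPUs, the standard Slurm query tools can be used. The following is only a sketch; the lowercase partition name {{code language="none"}}ps{{/code}} is an assumption, so verify the actual names with a plain {{code language="none"}}sinfo{{/code}} first.

(% class="code" %)
(((
# list all partitions visible to your account (summary view)
sinfo -s

# show the nodes and generic resources (e.g. GPUs) of one partition;
# the partition name "ps" is an assumption - check the output of "sinfo" first
sinfo -p ps -o "%P %n %G"
)))
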
The Maxwell Display Server is intended for interactive code development (it provides 4 NVIDIA Quadro M6000 GPUs) and for running short jobs as a shared resource. Dedicated nodes are available via [[Slurm Jobs>>url:https://confluence.desy.de/display/IS/Running+Jobs+on+Maxwell||shape="rect"]], which are not meant to be used with a GUI (write your output to file).

Software on Maxwell is managed via the module system. Check

(% class="code" %)
(((
module avail
)))

for all available software and

(% class="code" %)
(((
module load <module name>
)))

e.g.

(% class="code" %)
(((
module load matlab/R2019a
)))

to load the listed software into your environment.
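
A short example session might look like the sketch below; {{code language="none"}}module list{{/code}} and {{code language="none"}}module unload{{/code}} are standard module commands, and the exact module versions available on Maxwell may differ.

(% class="code" %)
(((
# search for a specific package instead of listing everything
module avail matlab

# load it, check what is currently loaded, and unload it again
module load matlab/R2019a
module list
module unload matlab/R2019a
)))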

===== Access Points =====

The most convenient and officially preferred way is via the [[Maxwell Display Server>>url:https://max-display3.desy.de:3389||shape="rect"]] using the (% class="twikiNewLink" %)FastX3(%%) client or a web browser. The required resources are automatically granted to new {{code language="none"}}Scientific User Accounts{{/code}} and can be requested for existing accounts via FS-EC.

[[Photon Science staff>>url:https://confluence.desy.de/display/IS/Maxwell+for+Photon+Science||shape="rect"]] can also connect to the work group server [[max-fsc.desy.de>>url:http://max-fsc.desy.de||shape="rect"]] or - if you need a GPU for your calculations - [[max-fsg.desy.de>>url:http://max-fsg.desy.de||shape="rect"]]. [[External users>>url:https://confluence.desy.de/display/IS/Maxwell+for+Photon+Science+users||shape="rect"]] can also SSH to [[desy-ps-cpu.desy.de>>url:http://desy-ps-cpu.desy.de||shape="rect"]] or - if you need a GPU for your calculations - [[desy-ps-gpu.desy.de>>url:http://desy-ps-gpu.desy.de||shape="rect"]] from inside the DESY network. From outside, they have to connect to [[max-display.desy.de>>url:http://max-display.desy.de||shape="rect"]] first.
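
For access from outside the DESY network, one possible way (a sketch, not an official recommendation; replace {{code language="none"}}username{{/code}} with your own account) is to use the display server as an SSH jump host:

(% class="code" %)
(((
# direct login from inside the DESY network
ssh username@desy-ps-cpu.desy.de

# from outside: jump via max-display.desy.de (requires OpenSSH 7.3 or newer)
ssh -J username@max-display.desy.de username@desy-ps-cpu.desy.de
)))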

For working with Jupyter notebooks in [[Python>>doc:FLASHUSER.Data Acquisition and controls.Data Access at FLASH (DAQ, gpfs,\.\.\.).Online data analysis.Anaconda Python at FLASH.WebHome]], DESY provides a [[JupyterHub>>url:https://confluence.desy.de/display/IS/JupyterHub+on+Maxwell||shape="rect"]] instance; the corresponding resource is automatically allocated to new Scientific User Accounts.

===== During the beamtime =====

For online processing during a beamtime it is possible to request a node reservation via maxwell(at)service; this has to be done 1 to 2 weeks before the beamtime. By the time the beamtime starts in the [[GPFS>>url:https://confluence.desy.de/display/ASAP3/Directory+Structure||shape="rect"]], all users should have a [[DOOR>>url:https://door.desy.de/door/||shape="rect"]] account, be registered as participants of the beamtime, and have a Scientific User Account in order to work on this node. Currently (May 2019) FS can also provide a node with a single P100 GPU.
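
Once a reservation is in place, it can be used when submitting jobs. The sketch below is only an illustration; the reservation name {{code language="none"}}beamtime_res{{/code}} and the script name are placeholders for what you receive from the Maxwell team and your own analysis script.

(% class="code" %)
(((
# show the active reservations and their time windows
scontrol show reservation

# submit a batch job to the reserved nodes ("beamtime_res" is a placeholder)
sbatch --reservation=beamtime_res my_online_analysis.sh
)))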

===== Scientific User Accounts =====

For working on Maxwell you need the corresponding resources, which come with the Scientific User Account. On Maxwell you can check the resources of a user via (passing the user's id is optional):

(% class="code" %)
(((
my-resources [id]
)))

When you log in to Maxwell for the first time, a home directory with a hard quota of 20 GB is created. By default, in-house staff are granted the PS resource while external users are granted the PSX resource.

To create a Scientific User Account, the Local Contact has to request it via FS-EC, passing the user's {{code language="none"}}Full Name{{/code}} and (% class="WYSIWYG_TT" %)E-mail Address(%%). FS-EC will send a form the user has to sign, and after that the Local Contact gives the user the initial password verbally.

===== SLURM scheduler =====

Slurm is an open source resource management and job scheduling system for Linux clusters. It schedules jobs on a first-come, first-served basis, but uses a backfill algorithm to let smaller jobs run in scheduling gaps without delaying higher-priority jobs.

Job submission nodes are available via the load balancer: max-display, max-fsc & max-fsg.
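
As an illustration of how a job could be submitted from one of these nodes, the following is a minimal sketch of a batch script; the partition name, resource requests, and analysis command are assumptions and have to be adapted to your account and workload.

(% class="code" %)
(((
#!/bin/bash
#SBATCH --partition=ps                # partition name is an assumption - check with "sinfo"
#SBATCH --time=01:00:00               # maximum run time
#SBATCH --nodes=1                     # number of nodes
#SBATCH --job-name=my-analysis
#SBATCH --output=my-analysis-%j.out   # %j is replaced by the Slurm job id

# load the required software via the module system
module load matlab/R2019a

# placeholder for the actual analysis command
./run_my_analysis.sh
)))

The script would then be submitted with {{code language="none"}}sbatch my-analysis.sh{{/code}} and monitored with {{code language="none"}}squeue -u $USER{{/code}}.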

[[image:attach:slurm-job-submission.png||height="250"]]


====== Example: Slurm Job Submission ======

[[image:attach:slurm_job.png||height="400"]]
