Working on Maxwell
===== Maxwell Resources =====

The [[Maxwell Hardware ~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/IS/Maxwell+Hardware||shape="rect"]] is split into different partitions which are upgraded continuously. The concept is that all integrated hardware is available via the {{code language="none"}}ALL{{/code}} partition ({{code language="none"}}ALLGPU{{/code}} for nodes with an integrated GPU), but a job will be terminated when a job with a higher priority (from the node's owner) comes in. Dedicated to Photon Science are the {{code language="none"}}PS{{/code}} (internal users) and {{code language="none"}}PSX{{/code}} (external users) partitions. Regarding GPUs (May 2019), the PS partition has nodes with a single NVIDIA Tesla P100 while the PSX partition has no GPUs at all.
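
To check which of these partitions are visible to your account and how many nodes they currently contain, Slurm's standard {{code language="none"}}sinfo{{/code}} command can be used on any login node. A minimal sketch (the lower-case partition names are an assumption and may differ on your system):

(% class="code" %)
(((
# summary of all partitions visible to your account
sinfo -s

# details for the photon-science partitions only
sinfo -p ps,psx
)))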
4 | |||
5 | For interactive code development (including 4 NVIDIA Quadro M6000 ) and running short jobs (as shared resource) the Maxwell Display Server is foreseen. Dedicated nodes will be available via [[Slurm Jobs~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/IS/Running+Jobs+on+Maxwell||shape="rect"]] which are not meant to work with a GUI (output to file). | ||
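
Since jobs on these dedicated nodes should write their results to a file instead of using a GUI, a quick way to test this is to submit a single command as a batch job. A minimal sketch (partition name, time limit and command are placeholders):

(% class="code" %)
(((
# run "hostname" as a batch job; stdout/stderr end up in maxwell-test.out
sbatch --partition=ps --time=00:10:00 --output=maxwell-test.out --wrap="hostname"
)))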
6 | |||
7 | Software on Maxwell is managed via the Module system. So check | ||
8 | |||
9 | \\ | ||
10 | |||
11 | (% class="code" %) | ||
12 | ((( | ||
13 | module avail | ||
14 | ))) | ||
15 | |||
16 | for all available software and | ||
17 | |||
18 | \\ | ||
19 | |||
20 | (% class="code" %) | ||
21 | ((( | ||
22 | moduel load * | ||
23 | ))) | ||
24 | |||
25 | e.g | ||
26 | |||
27 | \\ | ||
28 | |||
29 | (% class="code" %) | ||
30 | ((( | ||
31 | module load matlab/R2019a | ||
32 | ))) | ||
33 | |||
34 | to install the listed software. | ||
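
The full {{code language="none"}}module avail{{/code}} listing is long, so it often helps to filter for a specific package and to check what is already loaded. A small sketch using the standard module sub-commands:

(% class="code" %)
(((
# show only the available matlab versions
module avail matlab

# list the modules currently loaded in this shell
module list

# unload a module again
module unload matlab/R2019a
)))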
35 | |||
36 | ===== Access Points ===== | ||
37 | |||
The most convenient and officially preferred way is the [[Maxwell Display Server~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://max-display.desy.de:3443/auth/ssh||shape="rect"]], accessed via the (% class="twikiNewLink" %)[[FastX2>>url:http://hasfweb.desy.de/bin/edit/Setup/FastX2?topicparent=Setup.WorkingOnMaxwell;nowysiwyg=0||rel="nofollow" shape="rect"]](%%) client or a web browser. The required resources are automatically given to new {{code language="none"}}Scientific User Accounts{{/code}} and can be requested for existing accounts via FS-EC.

[[Photon Science staff ~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/IS/Maxwell+for+Photon+Science||shape="rect"]] can also connect to the work group server [[max-fsc.desy.de>>url:http://max-fsc.desy.de||shape="rect"]] or - if you need a GPU for your calculations - [[max-fsg.desy.de>>url:http://max-fsg.desy.de||shape="rect"]]. [[External Users~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/IS/Maxwell+for+Photon+Science+users||shape="rect"]] can also SSH to [[desy-ps-cpu.desy.de>>url:http://desy-ps-cpu.desy.de||shape="rect"]] or - if you need a GPU for your calculations - [[desy-ps-gpu.desy.de>>url:http://desy-ps-gpu.desy.de||shape="rect"]] from inside the DESY network. From outside they have to connect to [[desy-ps-ext.desy.de>>url:http://desy-ps-ext.desy.de||shape="rect"]] first.
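
A typical login from a terminal could look like the following sketch (the account name is a placeholder; {{code language="none"}}-X{{/code}} enables X forwarding for graphical programs):

(% class="code" %)
(((
# Photon Science staff, CPU work group server (use max-fsg for GPU work)
ssh -X your-account@max-fsc.desy.de

# external users coming from outside DESY: hop via the gateway first
ssh your-account@desy-ps-ext.desy.de
ssh desy-ps-cpu.desy.de
)))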
41 | |||
42 | For working with [[ Python>>doc:FLASHUSER.Data Acquisition and controls.Data Access at FLASH (DAQ, gpfs,\.\.\.).Online data analysis.Anaconda Python at FLASH.WebHome]]'s Jupyter Notebooks DESY provides a [[Jupyterhub~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/IS/JupyterHub+on+Maxwell||shape="rect"]] instance and its resource is automatically allocated to new Scientific User Accounts. | ||
43 | |||
44 | ===== During the beamtime ===== | ||
45 | |||
For online processing during a beamtime it is possible to make a node reservation via maxwell(at)service, which has to be done 1 to 2 weeks before the beamtime. By the time you start the beamtime in the [[GPFS~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://confluence.desy.de/display/ASAP3/Directory+Structure||shape="rect"]], all users should have a [[DOOR~[~[image:url:http://hasfweb.desy.de/pub/TWiki/TWikiDocGraphics/external-link.gif~|~|width="13" height="12"~]~]>>url:https://door.desy.de/door/||shape="rect"]] account, be registered as participants for the beamtime and have a Scientific User Account in order to work on this node. Currently (May 2019) FS can also provide a node with a single P100 GPU.
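
Once the reservation is in place, jobs are directed to the reserved node by passing the reservation name to Slurm. A sketch (the reservation name is a placeholder handed out with the reservation):

(% class="code" %)
(((
# interactive shell on the reserved node
salloc --partition=ps --reservation=<reservation-name>

# or submit a batch job into the reservation
sbatch --reservation=<reservation-name> my_analysis.sh
)))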
47 | |||
48 | ===== Scientifc User Accounts ===== | ||
49 | |||
For working on Maxwell you need the corresponding resources, which come with the Scientific User Account. On Maxwell you can check the resources of a user via (passing the user's id is optional):
51 | |||
52 | \\ | ||
53 | |||
54 | (% class="code" %) | ||
55 | ((( | ||
56 | my-resources id | ||
57 | ))) | ||
58 | |||
When you log in to Maxwell for the first time, a home directory with a hard quota of 20 GB is created. By default, in-house staff are granted the PS resource while external users are granted the PSX resource.
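
Because the home directory has a hard 20 GB quota, it is worth keeping an eye on its usage; a rough, file-system-independent check (the dedicated quota tools on Maxwell may give more precise numbers):

(% class="code" %)
(((
# total size of everything below your home directory
du -sh $HOME
)))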
60 | |||
61 | To create a Scientific User Account the Local Contact have to request it via FS-EC while passing User's {{code language="none"}}Full Name{{/code}} and(% class="WYSIWYG_TT" %) E-mail Address(%%). FS-EC will send a form the User have to sign and after that the Local Contact gives the User (verbally) the initial password. | ||
62 | |||
63 | ===== SLURM scheduler ===== | ||
64 | |||
Slurm is an open-source resource management and job scheduling system for Linux clusters. It schedules jobs on a first-come, first-served basis, but uses a back-fill algorithm.
66 | |||
67 | Job submission nodes are available via the load balancer: max-display, max-fsc & max-fsg. | ||
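
From one of these submission nodes the usual Slurm commands are used to submit and monitor jobs. A minimal sketch (the script name is a placeholder):

(% class="code" %)
(((
# submit a job script; Slurm prints the assigned job id
sbatch my_job.sh

# list your own jobs in the queue
squeue -u $USER

# cancel a job if necessary
scancel <jobid>
)))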
68 | |||
69 | [[image:url:http://hasfweb.desy.de/pub/Setup/WorkingOnMaxwell/slurm-job-submission.png||alt="slurm-job-submission.png" width="800" title="slurm-job-submission.png" height="246"]] | ||
70 | |||
71 | ====== Example: Slurm Job Submission ====== | ||
72 | |||
[[image:url:http://hasfweb.desy.de/pub/Setup/WorkingOnMaxwell/slurm_job.png||alt="slurm_job.png" width="1058" title="slurm_job.png" height="595"]]
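
For reference, a minimal job script along these lines could look like the following sketch (this is not the script shown in the screenshot; partition, time limit and the MATLAB call are assumptions):

(% class="code" %)
(((
#!/bin/bash
#SBATCH --partition=ps           # or psx / all, depending on your resources
#SBATCH --time=01:00:00          # wall-clock limit
#SBATCH --nodes=1
#SBATCH --job-name=example
#SBATCH --output=example-%j.out  # stdout/stderr go to this file (%j = job id)

# load the software needed by the job
module load matlab/R2019a

# run the actual work without a GUI, writing all output to the file above
matlab -nodisplay -r "disp('hello from Maxwell'); exit"
)))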