00 - How to login to Maxwell

DESY has a quite powerful compute cluster called the Maxwell cluster. The documentation can be found here [[https:~~/~~/confluence.desy.de/display/MXW/Maxwell+Cluster>>doc:MXW.Maxwell Cluster.WebHome||shape="rect"]]; however, as this can be confusing at times, we will try to condense it into a step-by-step manual.

{{toc/}}

= {{id name="00-HowtologintoMaxwell-GettingaDESYAccount"/}}Getting a DESY Account =

During your beamtime you will encounter multiple systems, for which you will need two different types of accounts:

== {{id name="00-HowtologintoMaxwell-TheDOORAccount"/}}The DOOR Account ==

Before you arrive you have to create a DOOR account and complete all the safety trainings. This account is also used for the gamma-portal, where you can manage your beamtime data, grant access to other users and manage FTP access. However, this account does not work with the other resources; for those you will have to request a second account:

== {{id name="00-HowtologintoMaxwell-ThePSXAccount"/}}The PSX Account ==

If you decide during a beamtime that you want access to the cluster, tell your local contact and they will request a PSX account for you. With this account you get access to the Kerberos, Windows and AFS resources at DESY, which include the cluster.

After you have received the account, you have to change the initial password within 6 days. For this, go to [[https:~~/~~/passwd.desy.de/>>url:https://passwd.desy.de/||shape="rect"]] and log in with your user name and the initial password (you do not need an OTP when you sign in for the first time). Then agree to the terms and change your password.

= {{id name="00-HowtologintoMaxwell-UsingtheCluster"/}}Using the Cluster =

== {{id name="00-HowtologintoMaxwell-StructureoftheCluster"/}}Structure of the Cluster ==

=== {{id name="00-HowtologintoMaxwell-Overview"/}}Overview ===

The Maxwell Cluster consists of (as of 2021) more than 750 nodes. To organize this, you cannot access any node directly; you first have to request compute resources. You can then connect from an entrance node to your compute node.

=== {{id name="00-HowtologintoMaxwell-EntranceNodes"/}}Entrance Nodes ===

If you have successfully obtained a PSX account you can get started. The entrance nodes are reached via:

[[https:~~/~~/max-display.desy.de:3443/auth/ssh>>url:https://max-display.desy.de:3443/auth/ssh||shape="rect"]] (in any case)

These nodes are **not **for processing, as you will share them with many other users. So please do not do anything computationally intensive on them, like reconstruction or visualization. Viewing images is fine.

=== {{id name="00-HowtologintoMaxwell-FastX2"/}}FastX3 ===

The cluster uses the software FastX3 for the connection and the virtual desktop. To get the right version of it, log in via the web interface; the download link for the desktop client is in the bottom right corner. The versions have to match exactly to work properly.

If you want to add a connection in the desktop client, click the plus sign, select "web", enter the address above (including the port) and your username, and force SSH authentication. Then you can choose whether you want a virtual desktop (XFCE) or a terminal.

=== {{id name="00-HowtologintoMaxwell-Partitions"/}}Partitions ===

Starting from an entrance node, you can connect to a compute node. As there are multiple levels of priority etc., the nodes are organized in partitions, and you can only access some of them. To view which ones, open a terminal and use the command:

{{code}}
my-partitions
{{/code}}

Your result will look something like this:

[[image:attach:P5I.User Guide\: NanoCT.4\. Reconstruction Guide.00 - How to login to Maxwell.WebHome@image2021-5-4_10-28-14.png||queryString="version=1&modificationDate=1620116894626&api=v2"]]

== {{id name="00-HowtologintoMaxwell-SLURM"/}}SLURM ==

Access to the resources of the cluster is managed via a scheduler, SLURM. It schedules the access to nodes and can revoke access if higher-priority jobs arrive.

=== {{id name="00-HowtologintoMaxwell-PSXPartition"/}}PSX Partition ===

Here you cannot be kicked out of your allocation. However, only a few nodes are in this partition and you can only allocate a few of them in parallel (2021: 5). Some of them have GPUs available.

=== {{id name="00-HowtologintoMaxwell-AllPartition"/}}All Partition ===

A very large number of nodes is available and you can allocate many in parallel (2021: 100). However, each allocation can be revoked without warning if someone with a higher priority comes along, which happens very often. If you want to use this partition, be sure to design your job accordingly. CPU nodes only.

=== {{id name="00-HowtologintoMaxwell-AllgpuPartition"/}}Allgpu Partition ===

Like the all partition, but with GPUs.

=== {{id name="00-HowtologintoMaxwell-JhubPartition"/}}Jhub Partition ===

For the Jupyter Hub.

== {{id name="00-HowtologintoMaxwell-ConnectingtotheCluster"/}}Connecting to the Cluster ==

Connect to an entrance node via FastX. When you start a session, a load balancer (max-display001-003, max-nova001-002) automatically assigns you to a node.

[[image:attach:P5I.User Guide\: NanoCT.4\. Reconstruction Guide.00 - How to login to Maxwell.WebHome@image2021-4-27_13-55-52.png||queryString="version=1&modificationDate=1619524552546&api=v2"]]

Choose a graphical interface and look around.

== {{id name="00-HowtologintoMaxwell-DataStorage"/}}Data Storage ==

The Maxwell cluster offers many storage systems. The most important ones are:

* your user folder: this has a hard limit of 30 GB, so be sure not to exceed it
* the GPFS: this is where all the beamtime data are stored
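
To check how close you are to the 30 GB limit of your user folder, a plain shell command is enough, e.g.:

{{code}}
# summarize the total size of your home directory
du -sh ~
{{/code}}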

=== {{id name="00-HowtologintoMaxwell-GPFS"/}}GPFS ===

Usually you can find your data at: /asap3/petra3/gpfs/<beamline>/<year>/data/<beamtime_id>

In there you will find a substructure:

* raw: the raw measurement data. Only the applicant and the beamtime leader can write/delete there
* processed: for all processed data
* scratch_cc: a scratch folder without backup
* shared: for everything else
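
For illustration, with made-up values (beamline p05, beamtime id 11012345) the layout looks like this:

{{code}}
cd /asap3/petra3/gpfs/p05/2021/data/11012345
ls
# raw  processed  scratch_cc  shared
{{/code}}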

The GPFS takes regular snapshots. Its total capacity is huge (several PB).

== {{id name="00-HowtologintoMaxwell-HowtoGetaComputeNode"/}}How to Get a Compute Node ==

If you want to do some processing, there are two ways to start a job in SLURM:

1. Interactive
1. Batch

In both cases you are the only person working on the node, so use it as much as you like.

=== {{id name="00-HowtologintoMaxwell-StartinganInteractiveJob"/}}Starting an Interactive Job ===

To get a node you have to allocate one via SLURM, e.g.:

{{code}}
salloc -N 1 -p psx -t 1-05:00:00
{{/code}}

Looking at the individual options:

* salloc: specifies that you want an interactive allocation
* -N 1: for one node
* -p psx: on the psx partition. You can also give multiple partitions separated by commas: -p psx,all
* -t 1-05:00:00: for a duration of 1 day and 5 hours
* other options could be ~-~-mem=500GB for a node with at least 500 GB of memory, or {{code language="none"}}--constraint=P100{{/code}} if you need a GPU
* ... see the SLURM documentation for more options; a combined example follows below
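
Putting several of these options together, a request for a GPU node with plenty of memory could look like this (a sketch; check with my-partitions that you actually have access to allgpu):

{{code}}
salloc -N 1 -p allgpu -t 0-06:00:00 --mem=500GB --constraint=P100
{{/code}}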

If your job is scheduled, you will see your assigned node and can connect to it via ssh. (In the rare case where you do not see anything, use my-jobs to find out the host name.)
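
A complete interactive session could then look like this (the node name max-wn1037 is only a made-up example; use whatever salloc assigns to you):

{{code}}
# request one node on the psx partition for six hours
salloc -N 1 -p psx -t 0-06:00:00
# salloc reports the granted allocation and the assigned node, e.g. max-wn1037
ssh max-wn1037
# ... work interactively on the node ...
exit
{{/code}}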

=== {{id name="00-HowtologintoMaxwell-Startingabatchjob"/}}Starting a batch job ===

For a batch job you need a small shell script describing what you want to do. You do not see the job directly, but the output is written to a log file (and results can be stored on disk).

With a batch job you can also start an array job, where the same task is executed on multiple servers in parallel.

An example of such a script:

{{code}}
#!/bin/bash
#SBATCH --time 0-01:00:00
#SBATCH --nodes 1
#SBATCH --partition all,psx
#SBATCH --array 1-80
#SBATCH --mem 250GB
#SBATCH --job-name ExampleScript

# make the module command available and print the job/array environment
source /etc/profile.d/modules.sh
echo "SLURM_JOB_ID $SLURM_JOB_ID"
echo "SLURM_ARRAY_JOB_ID $SLURM_ARRAY_JOB_ID"
echo "SLURM_ARRAY_TASK_ID $SLURM_ARRAY_TASK_ID"
echo "SLURM_ARRAY_TASK_COUNT $SLURM_ARRAY_TASK_COUNT"
echo "SLURM_ARRAY_TASK_MAX $SLURM_ARRAY_TASK_MAX"
echo "SLURM_ARRAY_TASK_MIN $SLURM_ARRAY_TASK_MIN"

module load maxwell gcc/8.2

# run the actual processing script, handing over the array task id
~/.local/bin/ipython3 --pylab=qt5 PathToYourScript/Script.py $SLURM_ARRAY_TASK_ID

exit
{{/code}}

To run this, use:

{{code}}
sbatch ./your_script.sh
{{/code}}
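
sbatch only prints the job id and returns immediately; with the array settings above, each task writes its output to a file like slurm-<jobid>_<taskid>.out in the directory you submitted from (the job id below is made up):

{{code}}
sbatch ./your_script.sh
# Submitted batch job 1234567
ls slurm-1234567_*.out
{{/code}}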

=== {{id name="00-HowtologintoMaxwell-Viewingyouallocations"/}}Viewing your allocations ===

To view your pending or running allocations you can use:

{{code}}
squeue -u <username>
{{/code}}

or

{{code}}
my-jobs
{{/code}}

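The squeue output lists the job id, partition, job name, state, run time and assigned node. A hypothetical example:

{{code}}
squeue -u myuser
#  JOBID PARTITION     NAME     USER ST   TIME  NODES NODELIST(REASON)
# 1234567       psx     bash   myuser  R   1:23      1 max-wn1037
{{/code}}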

=== {{id name="00-HowtologintoMaxwell-Whatisrealisticintermsofresources"/}}What is realistic in terms of resources ===

To be fair, you will not get 100 nodes every time you want them. Especially during a user run, the machines are often quite busy. But if you design your scripts to be tolerant of sudden cancellation, it is still worth trying whether you profit from massive parallelization.

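One common way to make an array job tolerant of cancellations is to let every task skip work that is already finished, so a resubmitted job only redoes the missing parts. A minimal sketch (the marker file name is arbitrary):

{{code}}
# inside the batch script of an array task
DONE=processed/chunk_${SLURM_ARRAY_TASK_ID}.done
if [ -e "$DONE" ]; then
    echo "chunk $SLURM_ARRAY_TASK_ID already processed, skipping"
    exit 0
fi
# ... do the actual work for this chunk ...
touch "$DONE"
{{/code}}
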
If you just want to do some small processing, use one of the psx nodes. This should work most of the time.

== {{id name="00-HowtologintoMaxwell-GrantingDataAccesstootherBeamtimes"/}}Granting Data Access to other Beamtimes ==

If you have to add other users to a past beamtime, this can be done via the gamma-portal (by the PI, the beamtime leader or a beamline scientist). After the accounts have been added, these users have to make sure to log off from **all **FastX sessions etc. so that the permissions get updated.