Working on Maxwell
Maxwell Resources
The Maxwell hardware is divided into different partitions, which are upgraded constantly. The concept is that all integrated hardware is available via the ALL partition (ALLGPU for nodes with an integrated GPU), but a job running there will be terminated when a higher-priority job (from the node's owner) comes in. Dedicated to Photon Science are the PS (internal users) and PSX (external users) partitions. Regarding GPUs (May 2019), the PS partition has nodes with a single NVIDIA Tesla P100, while the PSX partition has no GPUs at all.
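The partitions and the GPUs they offer can be inspected with Slurm's sinfo command. A minimal sketch, assuming the partition names are spelled in lowercase on the command line:
sinfo --partition=all,allgpu,ps,psx --format="%P %D %G %l"
This prints, for each partition, the number of nodes, the generic resources (e.g. GPUs) and the time limit.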
For interactive code development (including 4 NVIDIA Quadro M6000 GPUs) and for running short jobs (as a shared resource), the Maxwell Display Server is intended. Dedicated nodes are available via Slurm jobs, which are not meant to work with a GUI (output goes to a file).
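A batch job on a dedicated node could look like the following sketch; the partition, time limit, job name and script name are placeholders and have to be adapted:
#!/bin/bash
#SBATCH --partition=ps                # or psx / all, depending on your resources
#SBATCH --time=01:00:00               # wall-clock limit
#SBATCH --job-name=my-analysis        # placeholder job name
#SBATCH --output=my-analysis-%j.out   # output is written to a file, not a GUI
./my_analysis.sh                      # placeholder for the actual program
Submit it with
sbatch my-job.sh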
Software on Maxwell is managed via the Module system. Check
module avail
for all available software and use
module load <module>
e.g.
module load matlab/R2019a
to load the listed software into your environment.
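To search for a specific package or to see what is currently loaded, the module command also accepts, for example (matlab is just an example package name):
module avail matlab
module list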
Access Points
The most convenient and officially preferred way is the Maxwell Display Server, reachable via the FastX2 client or a web browser. The required resources are automatically granted to new Scientific User Accounts and can be requested for existing accounts via FS-EC.
Photon Science staff can also connect to the work group server max-fsc.desy.de or, if a GPU is needed for the calculations, to max-fsg.desy.de. External users can SSH to desy-ps-cpu.desy.de or, for GPU calculations, to desy-ps-gpu.desy.de from inside the DESY network. From outside they have to connect to max-display.desy.de first.
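For an external user coming from outside the DESY network, the login chain could look like this sketch (the user name is a placeholder):
ssh -X username@max-display.desy.de
ssh -X desy-ps-gpu.desy.de        # hop on from max-display if a GPU node is needed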
For working with Python's Jupyter notebooks, DESY provides a JupyterHub instance; its resource is automatically allocated to new Scientific User Accounts.
During the beamtime
For online processing during a beamtime it is possible to make a node reservation via maxwell(at)service, which has to be done 1 to 2 weeks before the beamtime. By the time the beamtime is started in GPFS, all users should have a DOOR account, be registered as participants of the beamtime, and have a Scientific User Account in order to work on this node. Currently (May 2019) FS can also provide a node with a single P100 GPU.
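Once the reservation is in place, jobs can be directed to it via Slurm's reservation option. A minimal sketch, assuming the reservation name communicated for the beamtime is my-beamtime:
srun --reservation=my-beamtime --partition=ps --pty bash
This starts an interactive shell on the reserved node; sbatch accepts the same --reservation option for batch jobs.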
Scientific User Accounts
For working on Maxwell you need the corresponding resources, which come with the Scientific User Account. On Maxwell you can check a user's resources via (passing the user's id is optional):
my-resources id
When you log in to Maxwell for the first time, a home directory with a hard quota of 20 GB is created. By default, in-house staff are granted the PS resource, while external users are granted the PSX resource.
To create a Scientific User Account, the Local Contact has to request it via FS-EC, passing the user's full name and e-mail address. FS-EC sends a form the user has to sign, and after that the Local Contact gives the user the initial password verbally.
SLURM scheduler
Slurm is an open-source resource management and job scheduling system for Linux clusters. It schedules jobs on a first-come, first-served basis, combined with a backfill algorithm.
Job submission nodes are available via the load balancer: max-display, max-fsc & max-fsg.
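From one of these nodes, jobs are managed with the usual Slurm commands; a short sketch, with job-script.sh and the job id as placeholders:
sbatch job-script.sh                    # submit a batch script
squeue -u $USER                         # list your pending and running jobs
scancel <jobid>                         # cancel a job
salloc --partition=ps --time=02:00:00   # allocate a node for interactive work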