Remote access
This tutorial covers connecting to remote servers (IPP Prague, Marconi Gateway and IPP Garching), running SOLPS-ITER there, and transferring SOLPS runs between cases and servers.
Connection to IPP Prague (COMPASS)
First, a word about COMPASS servers. There are three servers available for remote connection: ltserv (in versions ltserv3 and ltserv4 as of November 2020), Abacus and Soroban. Ltserv is suited for everyday use, such as viewing files, writing theses and using PowerKey (a tool which lets you check your daily attendance at the institute). Abacus and Soroban are powerful clusters (Abacus being the older one) suited for demanding scientific calculations, such as SOLPS-ITER simulations.
The central SOLPS-ITER installation is located on one of the Soroban nodes. Users are encouraged to make their installations there as well. Subsequently, it is good practice to create symbolic links to your installation from all the other nodes (and soroban-front-01) so that it can be accessed from them. This is because the Soroban computational power should be tapped from the Soroban front, which decides which node to use for your job. The central installation can, for instance, be found in /net/soroban-front-01/scratch/solps/solps-iter. This path will also work from Abacus, but not from ltserv.
There are several ways to connect to our institute remotely. Let's name SSH connection, SSHFS, VPN and X2Go.
SSH
SSH connection is the basic, fastest and often the most useful way to access COMPASS servers. It is also the default way to run SOLPS-ITER from anywhere - home, IPP, workstation or laptop. SSH allows you to open a command line on the remote computer via the following command:
ssh -X username@soroban.tok.ipp.cas.cz
soroban may be replaced by another server name: ltserv3, ltserv4, abacus, soroban-node-01 etc. If you are in the internal IPP network (working directly at IPP or using VPN or X2Go), you may leave out the tok.ipp.cas.cz part.
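If you connect often, an entry in your local ~/.ssh/config saves typing. A minimal sketch, where the alias soroban-tok and the username are placeholders:
# ~/.ssh/config
Host soroban-tok
    HostName soroban.tok.ipp.cas.cz
    User username
    ForwardX11 yes
Afterwards, ssh soroban-tok does the same as the full command above.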
To run SOLPS-ITER at IPP Prague via SSH:
- Log into the Soroban front and ask for some computational power from the resource management system.
  ssh -X username@soroban
  qsub -IX
  The -I flag means you are starting an interactive session. The -X flag means, as usual, that Matplotlib plots and similar graphics are going to be tunneled to your screen. In an interactive session, it is assumed you are mostly going to mess around, make cheap simulations with b2run and look at the results. If you want to submit longer or more numerous simulations, use sorobansubmit instead and don't bother initiating qsub -I.
- Go to the SOLPS-ITER installation directory you wish to use (also called $SOLPSTOP in the SOLPS-ITER jargon):
  cd /net/soroban-node-06/scratch/jirakova/solps/solps-iter
  This, in particular, is the directory of my personal SOLPS-ITER installation at Soroban node 6.
- Activate the SOLPS-ITER work environment: switch to the tcsh shell, load the environment variables and commands, and tell SOLPS (in particular DivGeo, Carre and the other mesh-building software) which machine you're going to simulate.
  tcsh
  source setup.csh
  setenv DEVICE compass
Now you're good to go. The whole sequence is collected in the sketch below.
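Putting the three steps together, a typical interactive session looks roughly like this (the username and the central installation path are placeholders; substitute your own):
ssh -X username@soroban
qsub -IX
cd /net/soroban-front-01/scratch/solps/solps-iter
tcsh
source setup.csh
setenv DEVICE compass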
SSHFS
Sometimes you'll only need to access the files of SOLPS-ITER (viewing the manual, copying the equilibrium file etc.) in a file browser. To this end, the SSHFS command is ideal.
sshfs username@soroban.tok.ipp.cas.cz:/ /path/to/mount/point
This mounts the entire Soroban file system, so you will find the central $SOLPSTOP directory in /path/to/mount/point/scratch/solps/solps-iter/. After your work is done, don't forget to dismount:
fusermount -u /path/to/mount/point
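A minimal sketch of the full mount/unmount cycle, using a hypothetical mount point in your home directory:
mkdir -p ~/soroban                                  # create the mount point once
sshfs username@soroban.tok.ipp.cas.cz:/ ~/soroban   # mount the Soroban file system
ls ~/soroban/scratch/solps/solps-iter               # browse the central installation
fusermount -u ~/soroban                             # dismount when done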
VPN
VPN (Virtual Private Network) is the highly recommended way to connect to IPP network from the outside. Combined with SSH and SSHFS, it allows for a stable and secure connection.
To set up a VPN, connect to ltserv3 (for instance using SSHFS) and go to the folder /compass/home/username/Public
. There should be a file called username.tar.gz
. Copy it to your local device and extract the files to some location where you will find them easily (and will not delete them by accident). Then either follow Stanislav Tokoš's VPN setup tutorial or, on Linux, do the following:
- Check your version of OpenVPN.
  openvpn --version
  If the OpenVPN version is above 2.4.4 and the OpenSSL version is above 1.0.2m, you're good to go. If it isn't, you'll have trouble. In that case:
  - Add the current OpenVPN repository among your repositories.
    sudo -s
    wget -O - https://swupdate.openvpn.net/repos/repo-public.gpg | apt-key add -
    echo "deb http://build.openvpn.net/debian/openvpn/stable xenial main" > /etc/apt/sources.list.d/openvpn-aptrepo.list
  - Download the newest package version.
    apt-get update
    apt-get upgrade
  - Restart your computer.
  - Check the version of OpenVPN again; now it's hopefully better.
- Create a text file where you copy-paste the preferred VPN configuration from the tutorial. Replace the {{your_name}} in the file with your username.
- Open the Network Manager and add a VPN connection, choosing to import the settings from a file. Choose the file you've just created. (If you prefer the command line, see the sketch after this list.)
- Call the VPN whatever you like.
- As the username and password, write your LDAP username and password.
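If you prefer the command line to the Network Manager GUI, the same import can be done with nmcli, assuming the NetworkManager OpenVPN plugin is installed. The file name below is a placeholder; the connection takes its name from it:
nmcli connection import type openvpn file my-ipp-vpn.ovpn
nmcli --ask connection up my-ipp-vpn    # prompts for your LDAP username and password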
To connect to the VPN, right-click the Connections icon on the right of your control panel and find VPN connections. You will know that your VPN is working when:
- You can access WebCDB without typing your username and password.
- You can log into Abacus via ssh username@abacus, leaving out the tok.ipp.cas.cz part.
- Your internet works like usual.
Unfortunately, this type of VPN configuration won't allow you to access subscription-based journal articles, such as those on IoP Science, but you can still log in via Shibboleth with your VERSO credentials. If that doesn't work, there's always SciHub.
X2Go
The X2Go software allows you to open a remote desktop of your IPP Prague workspace, emulating work on a workstation in slightly worse graphics quality. This is useful, for instance, when you need to quickly access internal webpages (such as, until March 2020, the COMPASS wiki). To set up X2Go, follow the COMPASS wiki tutorial.
Resource management system
Abacus and Soroban use a resource management system accessed through the qsub command, which tries to allocate computational power to individual users and processes as efficiently as possible. A tutorial can be found on the COMPASS wiki. To put it in layman's terms, if you know how long your programme will run and how much computational power it will require, you can let the resource management system know what to expect and it will split the work efficiently between the processors. To submit SOLPS-ITER simulations, use the sorobansubmit command.
Another use of qsub for SOLPS-ITER modellers is, say, when one wishes to compare SOLPS-ITER and experimental results using an interactive Python shell. In this case, use qsub in the interactive mode:
qsub -IX
When you're done, leave the interactive session with the exit command. To check the current usage of Soroban, open an internet browser (on your workstation, through X2Go or with VPN turned on) and enter the address:
soroban:44444
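Assuming the scheduler behind qsub is PBS/Torque-like, its companion command qstat should also show the queue state from the command line (the username is a placeholder):
qstat               # list all jobs in the queue
qstat -u username   # list only your own jobs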
Connection to Marconi Gateway
The Marconi Gateway (also referred to as ITM or Gateway or Marconi) is a computing cluster connected to but somewhat independent of the Marconi Fusion cluster in Italy. (Independent in the sense that access to one doesn't grant you access to the other.) During the October 2018 SOLPS-ITER introduction course, we were encouraged to create an account there to run and install SOLPS-ITER. Now that SOLPS-ITER can be run on COMPASS (and perhaps IT4I), this is somewhat out-of-date. It can still be used as back-up, though.
To access Marconi Gateway, first of all you need login credentials. Follow these instructions from the EuroFusion website and expect a few days' delay.
Now that you can log in, you can explore all the possible connection types. Here I list SSH, because it's quick and painless, and NX via NoMachine, because it's what I use.
Connection via SSH
SSH access is better if all you need is the command line:
ssh -X username@login.eufus.eu
To mount the Gateway file system locally, use SSHFS:
sshfs username@login.eufus.eu:/ '/mount/point/'
NX client
An NX client is the more advanced, harder-to-set-up but convenient-to-use option, the recommended client being NoMachine. Sometimes it gives me trouble, but when it works, it's super fast and convenient. Its setup, described here, is basically a mirror of the official setup tutorial.
To use NoMachine, first download and install it however you must on your operating system. Then start the programme, click through whichever messages you need to click through, and click on "New" on the top right bar.
- Protocol: SSH
- Host: s53.eufus.eu
- Port: 22
- How do you want to authenticate on the host? Use the system login.
- Choose which authentication method you want to use. Password.
- Use a HTTP proxy for the network connection. Don't use a proxy.
- Name: Gateway
This concludes the connection set-up. Double-click the connection to open it. When you do this for the first time, you will be asked for username and password. Write them and choose "Save this password in the connection file". Congratulations, you're all set up! To start using SOLPS-ITER, open the command line on the remote desktop, go to your SOLPS-ITER installation directory (or install SOLPS-ITER as described in the installation tutorials) and initialise the SOLPS-ITER work environment (described in the same document).
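In that terminal, the work-environment initialisation mirrors the one described for IPP Prague; a minimal sketch, with the installation path as a placeholder:
cd /path/to/your/solps-iter
tcsh
source setup.csh
setenv DEVICE compass    # or your machine of choice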
Connection to IPP Garching (ASDEX-Upgrade)
Why would you want to connect to the IPP Garching computing clusters? Before summer 2020, the answer was simple: the SOLPSpy package, which to this day remains the best option for importing SOLPS-ITER results to Python. At that time I believed that SOLPSpy couldn't be run outside IPP Garching, so I ran SOLPS-ITER on Marconi Gateway, copied the files to IPP Garching, exported them to Python-readable form using SOLPSpy, copied that to IPP Prague and finally checked if the code results agree with the COMPASS experimental data. If it sounds complicated, it was. Thank God I have since figured out that SOLPSpy could be run at IPP Prague as well. (The tutorial on SOLPSpy installation and usage is here.) Consequently, this section remains as dead wood among the tutorials. Hopefully it will be useful to someone one day.
IPP Garching has a number of clusters, and not all were born equal. According to their cluster guidelines, the TOK-I cluster is the go-to option for processing SOLPS-ITER output.
Note that your home folder has limited disk space (50 GB), which means you can't copy SOLPS-ITER runs there. As described here, an ideal place for SOLPS-ITER runs is the /toks/scratch/username directory.
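For instance (a minimal sketch; the directory name is a placeholder), you can keep the runs on the scratch file system and only link them from your home folder:
mkdir -p /toks/scratch/username/solps_runs      # hypothetical run directory on scratch
ln -s /toks/scratch/username/solps_runs ~/solps_runs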
There are basically two ways to access the IPP Garching computing infrastructure:
- SSH. This is the efficient and easy option, but it doesn't fare so well when you want visuals: pictures, windows etc.
- VPN + remote desktop. This takes longer to set up, but if you can't get by with a bare command line (like me), you'll appreciate this option more.
I should also point out that an active VPN connection is necessary for a number of tasks within the IPP network, such as opening its github pages or connecting to the remote desktop.
Access to the AUG intranet
In all of the above cases, you need access to the AUG intranet first. If you have visited AUG, you may already have an account there. If not, write to Marion Berger (sekmst@ipp.mpg.de), the MST secretary at AUG, and ask her to confirm your request for a "real" user ID at the AUG computer system. You can submit the request here. Fill in "Marion Berger" as your supervisor/project manager. (I'm assuming that if you're working with SOLPS, you are part of MST1.)
Access to the TOK clusters
Login rights to the TOK clusters (TOK-I, TOK-S, TOK-P... ) usually don't come automatically with intranet access. To get access, write to David Coster (David.Coster@ipp.mpg.de).
SSH connection
All you need in this case is a command line. To log into the Solaris environment:
ssh -X username@sxbl16.aug.ipp.mpg.de
Here you can use cview, a program for viewing experimental data. However, to run Python, it's much better to switch into the Linux shell on one of the TOK clusters:
rlogin tok01
Alternatively, log into the TOK cluster directly from your own machine:
ssh -X your_username@toki01.bc.rzg.mpg.de
To browse the files, mount the file system with SSHFS:
sshfs -o transform_symlinks your_username@toki01.bc.rzg.mpg.de:/ '/path/to/mount/point'
VPN connection
- Go to the AUG computer network webpage and follow the instructions. To install the downloaded .sh file on Linux:
  cd /path/to/the/sh/file
  chmod +x anyconnect-linux64-4.7.03052-core-vpn-webdeploy-k9.sh
  sudo ./anyconnect-linux64-4.7.03052-core-vpn-webdeploy-k9.sh
- Run Cisco AnyConnect on your machine. Enter vpn.mpcdf.mpg.de as the server and your AUG intranet login. Click Connect. (It will ask for your password on every startup. I haven't yet found a way to make it remember me.) If you prefer the terminal, see the sketch below.
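The AnyConnect package usually ships a command-line client as well; assuming the default Linux install location, something like this should work (it prompts for your AUG intranet credentials):
/opt/cisco/anyconnect/bin/vpn connect vpn.mpcdf.mpg.de
/opt/cisco/anyconnect/bin/vpn disconnect    # when you're done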
Remote desktop
You have two options: one that is great (or "adequate", depending on your standards) and one that sucks (by all standards). The first is the ThinLinc Client, the other is the Oracle Virtual Desktop. Both of these links are exhaustive installation tutorials. In the case of the ThinLinc Client, though, be careful to download the ThinLinc Client and not the server. In my case the client didn't show up among installed applications right away, so I ran it from the command line:
tlclient
Note, again, that to connect to the remote desktop, you need to have an active VPN connection first.
Transferring SOLPS-ITER runs
It is frequently the case that you run or post-process SOLPS on different servers, for instance on the Soroban cluster (located at IPP Prague), at IT4I and at the Marconi Gateway. This section details how to transfer SOLPS data on several levels: inside a case, inside a SOLPS installation, between SOLPS installations, and between servers.
Note: The SOLPS command transfer_solps-iter_runs
is ignored here, as it requires the sshpass
command. sshpass
isn't pre-installed on either Gateway or Soroban, and installing it requires either superuser rights (which you might not have) or bugging the IT department (which you might not want).
To clear up the terminology:
- A server refers to a computer cluster with a common file system. Examples of servers are Soroban at IPP Prague, Karolína at IT4I, or Gateway at Marconi Fusion.
- A SOLPS installation is a directory where the SOLPS-ITER source code has been cloned and compiled. Several independent SOLPS installations may coexist on a single server (e.g. different versions of SOLPS), each with a separate work environment. Within this work environment, the top directory is called $SOLPSTOP.
- A case is a directory located in $SOLPSTOP/runs/, containing a baserun directory and probably a few directories called run, run2, diverged_run_wtf or cflme_0.3_cflmi_0.2. The individual runs share the computational grid specified in the baserun.
- A run is a specific SOLPS-ITER simulation, with its own boundary conditions, plasma solution, run.log etc.
Copy a run inside the same case
Make a copy with the cp
command.
cp -r run new_run
Transfer the plasma solution from one case to another
If both cases have the same B2.5 cell count, copy the converged b2fstate as the new b2fstati.
cp case1/run/b2fstate case2/run/b2fstati
This does not transfer the entire solution, but it does capture its essential part. Use this, for instance, to supply a new case with realistic starting plasma profiles instead of the default flat-profiles solution. Even if the simulation is completely different, this will reduce the time to convergence.
This can also be done when case2
contains additional ion species, such as sputtered carbon. SOLPS will take the existing solution (background deuterium plasma) and start up the remaining species from the flat profiles solution.
The plasma solution can also be transferred between different SOLPS-ITER installations, provided the versions are not too dissimilar. Use the scp command (see below) to transfer the b2fstate file between servers. In both cases, it is required that the target case is already built up (i.e. ready for running except for an initial plasma state) and has the same B2.5 cell count. (This is a compelling reason why, in the absence of grid issues, you should build all your simulations with the same number of cells.)
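For example, running from the server that holds the converged case, a single scp of the state file is enough (the paths and host name here are placeholders):
scp /solps/installation/path/runs/case1/run/b2fstate username@target-server.com:/solps/installation/path/runs/case2/run/b2fstati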
If the two cases have different B2.5 cell counts (for instance when running the same simulation on a finer grid), use the b2yt command. Refer to section 3.14 of the manual for instructions; I've never used it.
Transfer an entire case from one SOLPS installation to another
This is more complicated, as SOLPS-ITER doesn't store all the case-relevant files within the case directory. Files needed by B2.5 may be located in $SOLPSTOP/modules/Carre/
, files needed by EIRENE may be located in $SOLPSTOP/modules/Triang/
and so on. Jan Hečko has developed the following tutorial:
- Download the script baserun2archive.py to the server where your original simulation was performed.
- Open a terminal on the origin server and run the script:
  cd /download/path
  python3 ./baserun2archive.py /solps/installation/path/runs/case/baserun baserun.tar.gz
- Copy the archived baserun to the target server.
  scp baserun.tar.gz user@target-server.com:/solps/installation/path/runs/
- At the origin server, go to the case directory and archive the run directory of your choice.
  cd /solps/installation/path/runs/case/
  tar -czvf run.tar.gz run
- Copy the archived run to the target server.
  scp run.tar.gz user@target-server.com:/solps/installation/path/runs/
- Switch to the target server command line.
  ssh -X user@target-server.com
- Initiate the SOLPS work environment.
  cd /solps/installation/path
  tcsh
  source setup.csh
  setenv DEVICE compass #or your machine of choice
- Create a new case directory and move both the archives there.
  cd runs
  mkdir transferred_case
  mv baserun.tar.gz transferred_case/baserun.tar.gz
  mv run.tar.gz transferred_case/run.tar.gz
- Extract both the archives.
  cd transferred_case
  tar -xvf baserun.tar.gz
  tar -xvf run.tar.gz
- Link up the DivGeo file (the file in baserun ending with .dg) and the EIRENE links in baserun.
  cd baserun
  lns <DivGeo_file_name_without_the_.dg_extension>
  setup_baserun_eirene_links
- Correct the baserun time stamps.
  cd ..
  correct_baserun_timestamps
- Set up EIRENE links to baserun from run.
  cd run
  setup_baserun_eirene_links
- Correct the run time stamps.
  correct_b2yt_timestamps
- Perform a dry run and check which routines would be called.
  b2run -n b2mn | grep -oE 'b2[a-z]+.exe' | uniq
  If b2ai (initial plasma solution) is among the lines, watch out: the simulation is about to ignore your b2fstati and start from the flat profiles solution.
- Set the simulation time ('b2mndr_elapsed') in b2mn.dat to 60 seconds and restart the simulation.
  #Restart the simulation
  rm b2mn.prt
  cp b2fstate b2fstati
  #Run the simulation for 60 s
  b2run b2mn >& run.log &
- Wait and check the results.
  #Plot residuals of the continuity equation for all ion species
  resco
  #Plot time evolution of the outer midplane electron density
  2dt nesepm
If everything works, you have successfully transferred your simulation.
Using rsync to copy run data
If you don't intend to run the simulation on the target server, you can use the rsync command to simply transfer the files. Note that rsync cannot copy directly between two remote hosts, so run it on one of the two servers, for instance the source:
rsync -hivaz /path/to/case username2@destination.com:/path/to/case
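Before the real transfer, a cautious dry run (the standard rsync -n flag) shows what would be copied without copying anything:
rsync -hivazn /path/to/case username2@destination.com:/path/to/case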