Remote access
This tutorial covers connecting to remote servers (IPP Prague, Marconi Gateway and IPP Garching), running SOLPS-ITER on them, and transferring SOLPS runs between cases and servers.
Basic techniques
There are two main, general ways to connect to remote servers:
- SSH (supplemented by SSHFS)
- Remote desktop
Which method you'll grow to prefer depends on how you interact with SOLPS-ITER. If you work mainly through the command line, you may appreciate the baggage-free SSH access. If you work with a graphical interface (like the SOLPS GUI), you may appreciate the full desktop access of the remote desktop. In general, set them both up and see which one you like better. You may find yourself alternating between them depending on what you need to do.
SSH (Secure SHell)
According to Wikipedia, the Secure Shell Protocol (SSH Protocol) is a cryptographic network protocol for operating network services securely over an unsecured network. How I (a dummy) understand it: You know how you can open a terminal/command line on your computer and command it to do all sorts of wild shit? SSH lets you open a command line on a remote server. And then you can do all sorts of wild shit there! Like run SOLPS-ITER!
To open a remote command line using SSH, open a command line at your home computer and run:
ssh -X username@remote.server.eu
To break down that command:
- `ssh` is a command to use the SSH protocol.
- `-X` is graphics forwarding. If the remote command line attempts to open a graphics window (e.g. draw a picture with `b2plot`), the same window will open on your local machine as well. It's useful to have `-X` on pretty much all the time.
- `username` is your username on the remote server, which was set up when you got your account there (together with a password).
- `remote.server.eu` is the remote server address. It usually begins with the server name (like `soroban-node-06`) and ends with some sort of internet address (like `tok.ipp.cas.cz`).
SSH is secure because you must prove it's you before it lets you put in any commands. Two methods of authentication are common:
- Username + password. After typing the `ssh` command above, you will be prompted to punch in the password.
- SSH keys (often loosely called SSH certificates). This is what you typically set up when you get tired of typing your password again and again. SSH keys come in pairs: a public key and a private key, both long sequences of letters and numbers. The public key you upload to the remote server, the private key you keep to yourself. When you initiate the SSH connection, the remote server checks whether any of the public keys it holds matches your private key. If a match is found, the remote server concludes it must be you after all and doesn't ask you for a password anymore. Use Google and common sense to set up SSH key authentication.
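A minimal sketch of the key setup, assuming standard OpenSSH tooling (the key file name, comment and server address are placeholders):

```shell
# Generate an ed25519 key pair; -N "" means no passphrase (use one if you prefer).
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519_solps" -C "solps-access"

# Append the public key to ~/.ssh/authorized_keys on the remote server
# (you'll be asked for your password one last time).
ssh-copy-id -i "$HOME/.ssh/id_ed25519_solps.pub" username@remote.server.eu

# From now on, SSH finds the key and logs you in without a password prompt.
ssh -X username@remote.server.eu
```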
SSHFS (SSH FileSystem)
According to Wikipedia, SSHFS (SSH Filesystem) is a filesystem client to mount and interact with directories and files located on a remote server or workstation over a normal ssh connection. How I (a dummy) understand it: SSHFS lets you browse folders and files on a remote server like you would at your own computer. It's useful if you come from a Windows background and you can't get used to interacting with files through the SSH command line.
To mount a remote filesystem to your computer:
sshfs username@remote.server.eu:/ /path/to/mount/point
Note the `:/` at the end of the server address: the part after the colon is the remote directory to mount (here the root directory `/`). Mount the folder itself and not a link to it to avoid symlink problems.
After your work is done, don't forget to dismount:
fusermount -u /path/to/mount/point
SCP (Secure CoPy)
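SCP copies individual files and directories over the same SSH connection, which is handy when you don't want to mount the whole filesystem. A few usage sketches (all paths and addresses are placeholders):

```shell
# Copy a local file to the remote server
scp local_file.txt username@remote.server.eu:/remote/target/directory/

# Copy a remote file to the current local directory
scp username@remote.server.eu:/remote/path/to/file.txt .

# Copy a whole directory recursively
scp -r username@remote.server.eu:/remote/path/to/run ./run
```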
Remote desktop
According to Wikipedia, remote desktop software is software for remote administration of computers, allowing a desktop environment to be displayed on a computer, known as the client, other than the one on which it is running − the server. How I (a dummy) understand it: a remote desktop is a window straight to the desktop of a remote computer. If you maximize the window, you can pretend you're working straight at your work station (with some lagging and bad graphics).
There are many remote desktop programs, and each server tends to have a favourite one (X2Go at IPP Prague, NoMachine on EUROfusion Gateway...). See the individual tutorials for setup instructions.
Connection to IPP Prague (COMPASS)
First, a word about COMPASS servers. There are three servers available for remote connection: ltserv (in versions ltserv3 and ltserv4 as of November 2020), Abacus and Soroban. Ltserv is suited for everyday use, such as viewing files, writing theses and using PowerKey (a tool which lets you check your daily attendance to the institute). Abacus and Soroban are powerful clusters (Abacus being the older one) which are suited for demanding scientific calculations, such as SOLPS-ITER simulations.
The central SOLPS-ITER installation is located on one of the Soroban nodes. Users are encouraged to make their installations there as well. Subsequently, it is good practice to create symbolic links to your installation from all the other nodes (and soroban-front-01) so that it can be accessed from them. This is because the Soroban computational power should be tapped from the Soroban front, which decides which node to use for your job. The central installation can, for instance, be found in /net/soroban-front-01/scratch/solps/solps-iter. This will also work from Abacus, but not from ltserv.
There are several ways to connect to our institute remotely, namely SSH, SSHFS, VPN and X2Go.
SSH
SSH connection is the basic, fastest and often the most useful way to access COMPASS servers. It is also the default way to run SOLPS-ITER from anywhere - home, IPP, workstation or laptop. SSH allows you to open a command line on the remote computer via the following command:
ssh -X username@soroban.tok.ipp.cas.cz
soroban may be replaced by another server name: ltserv3, ltserv4, abacus, soroban-node-01 etc. If you are in the internal IPP network (working directly at IPP or using VPN or X2Go), you may leave out the tok.ipp.cas.cz part.
To run SOLPS-ITER at IPP Prague via SSH:
1. Log into the Soroban front and ask for some computational power from the resource management system.

   ssh -X username@soroban
   qsub -IX

   The `-I` means you are starting an interactive session. The `-X` means, as usual, that Matplotlib plots and similar graphics are going to be tunneled to your screen. In an interactive session, it is assumed you are mostly going to mess around, make cheap simulations with `b2run` and look at the results. If you want to submit longer or more numerous simulations, use `sorobansubmit` instead and don't bother initiating `qsub -I`.

2. Go to the SOLPS-ITER installation directory you wish to use (also called `$SOLPSTOP` in the SOLPS-ITER jargon):

   cd /net/soroban-node-06/scratch/jirakova/solps/solps-iter

   This, in particular, is the directory of my personal SOLPS-ITER installation at Soroban node 6.

3. Activate the SOLPS-ITER work environment. Switch to the `tcsh` shell, load the environment variables and commands, and tell SOLPS (in particular DivGeo, Carre and other mesh-building software) which machine you're going to simulate.

   tcsh
   source setup.csh
   setenv DEVICE compass
Now you're good to go.
SSHFS
Sometimes you'll only need to access the files of SOLPS-ITER (viewing the manual, copying the equilibrium file etc.) in a file browser. To this end, the SSHFS command is ideal.
sshfs username@soroban.tok.ipp.cas.cz:/ /path/to/mount/point
You will then find the central `$SOLPSTOP` directory in /path/to/mount/point/scratch/solps/solps-iter/. After your work is done, don't forget to dismount:
fusermount -u /path/to/mount/point
VPN
VPN (Virtual Private Network) is the highly recommended way to connect to the IPP network from the outside. Combined with SSH and SSHFS, it allows for a stable and secure connection.
To set up a VPN, connect to ltserv3 (for instance using SSHFS) and go to the folder /compass/home/username/Public. There should be a file called username.tar.gz. Copy it to your local device and extract the files to some location where you will find them easily (and will not delete them by accident). Then either follow Stanislav Tokoš's VPN setup tutorial or, on Linux, do the following:
1. Check your version of OpenVPN.

   openvpn --version

   If the version is above 2.4.4 and the OpenSSL version is above 1.0.2m, you're good to go. If it isn't, you'll have trouble. In that case:

   1. Add the current OpenVPN repository among your repositories.

      sudo -s
      wget -O - https://swupdate.openvpn.net/repos/repo-public.gpg | apt-key add -
      echo "deb http://build.openvpn.net/debian/openvpn/stable xenial main" > /etc/apt/sources.list.d/openvpn-aptrepo.list

   2. Download the newest package version.

      apt-get update
      apt-get upgrade

   3. Restart your computer.

   4. Check the version of OpenVPN again; now it's hopefully better.

2. Create a text file and copy-paste into it the preferred VPN configuration from the tutorial. Replace the `{{your_name}}` in the file with your username.

3. Open the Network Manager and add a VPN connection, choosing to import the settings from a file. Choose the file you've just created.

4. Call the VPN whatever you like.

5. As the username and password, write your LDAP username and password.
To connect to the VPN, right-click the Connections icon on the right of your control panel and find VPN connections. You will know that your VPN is working when:
- You can access WebCDB without typing your username and password.
- You can log into Abacus via `ssh username@abacus`, leaving out the `tok.ipp.cas.cz` part.
- Your internet works like usual.
Unfortunately, this type of VPN configuration won't allow you to access subscription-based journal articles, such as those on IoP Science, but you can still log in via Shibboleth with your VERSO credentials. If that doesn't work, there's always SciHub.
X2Go
The X2Go software allows you to open a remote desktop of your IPP Prague workspace, emulating work on a workstation in slightly worse graphics quality. This is useful, for instance, when you need to quickly access internal webpages (such as, until March 2020, the COMPASS wiki). To set up X2Go, follow the COMPASS wiki tutorial.
Resource management system
Abacus and Soroban use the resource management system qsub, which tries to allocate computational power to individual users and processes as efficiently as possible. A tutorial can be found on the COMPASS wiki. To put it in layman's terms, if you know how long your programme will run and how much computational power it will require, you can let the resource management system know what to expect and it will split the work efficiently between the processors. To submit SOLPS-ITER simulations, use the sorobansubmit command.
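For non-interactive jobs, the usual qsub pattern is a short job script that declares the resources you expect to need. A generic PBS-style sketch (the job name, resource numbers and the command it runs are all illustrative; for SOLPS runs on Soroban, prefer the sorobansubmit wrapper):

```shell
#!/bin/bash
# solps_job.sh -- illustrative PBS job script
#PBS -N solps_run           # job name shown in the queue
#PBS -l nodes=1:ppn=8       # one node, eight processors
#PBS -l walltime=24:00:00   # maximum expected run time

# qsub starts the job in $HOME; go back to where the script was submitted from
cd "$PBS_O_WORKDIR"
# whatever you would otherwise run interactively
./my_simulation
```

Submit it with `qsub solps_job.sh` and watch the queue with `qstat`.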
Another use of qsub for SOLPS-ITER modellers is, say, when one wishes to compare SOLPS-ITER and experimental results using an interactive Python shell. In this case, use qsub in the interactive mode:
qsub -IX
When you're done, quit the interactive session with the `exit` command. To check the current usage of Soroban, open an internet browser (on your workstation, through X2Go or with VPN turned on) and write
soroban:44444
Connection to EUROfusion Gateway
EUROfusion Gateway (EFGW) is a high-performance computing cluster hosted by HPC-CINECA in Italy. Before 2025, it was known among SOLPS users as the Marconi Gateway, Gateway, Marconi or ITM. It is an accessible platform to run SOLPS, although it has chronic problems with its installation and support.
Tip
Use Docker containers to bypass SOLPS installation.
Initial setup
To access EFGW, you need two independent accounts, registered under the same e-mail address.
1. An account at the CINECA UserDB portal. This serves for CINECA to verify your identity and affiliation, down to a scan of your personal ID and the brand of jeans you're currently wearing.

2. An HPC-CINECA project account. This serves to track your allocated computing resources and the projects (like "SOLPS modelling") you're working on.
If you had access to the Marconi Gateway before its rebranding, you already have an HPC-CINECA project account. If you want to keep using EFGW, you must register a UserDB account as well.
Instructions and links:
- HPC-CINECA documentation is a general, detailed guide to accessing CINECA clusters (Leonardo, G100, Pitagora...). Don't be discouraged that you don't see EFGW anywhere; it's listed in the section Specific users - EUROfusion.
- EFGW-specific documentation amends the general tutorials with EFGW-specific information. Mostly it's "replace `leonardo` with `efgw` in the command".
- The UserDB portal is where you log in using your UserDB credentials.
- The CINECA keycloak is where you log in using your HPC-CINECA project account credentials.
- The FAQ contains, among others, a solution to the `WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!` error message. It doesn't work for me, sadly.
How to connect to EFGW
HPC-CINECA has such strict security that you can't simply log in with a username and password. You must use SSH certificates, which are valid only for 12 hours, so you have to re-generate one every day using 2-factor authentication with your smartphone. Additionally, there is a fingerprint issue described below which I wasn't able to fix despite attentive tech support.
Prior to the day's first connection to EFGW, re-generate an SSH certificate with the Smallstep client:
step ssh login 'username' --provisioner efgw
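The Smallstep client loads the day's certificate into your SSH agent. To check whether you still have one loaded (for instance before deciding whether to re-run the login), list the agent's contents with standard OpenSSH tooling:

```shell
# List keys and certificates currently held by the SSH agent;
# the EFGW certificate shows up here after a successful "step ssh login".
ssh-add -l
```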
SSH in the command line: Official directions are:
ssh -X username@login.eufus.eu
login.eufus.eu is a front-end from which you will be redirected to one of the login nodes (vizXX-ext.efgw.cineca.it, where XX runs from 05 to 08), depending on which is the least busy. However, this command only works the first time. Once the login.eufus.eu fingerprint is among your known hosts, the next login, redirected to a different node with a different host key, fails with the error `WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!`. Despite this error being specifically treated in the FAQ, I haven't been able to fix it. I propose two workarounds:
- Connect to a specific node each time.

  ssh -X username@viz05-ext.efgw.cineca.it

- Remove login.eufus.eu from the list of known hosts prior to every login.

  ssh-keygen -f ~/.ssh/known_hosts -R login.eufus.eu
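Another possible workaround, which I haven't verified on EFGW, is an ~/.ssh/config entry that tells SSH not to store the front-end's fingerprint at all. Mind that disabling host key checking weakens SSH's protection against man-in-the-middle attacks:

```shell
# ~/.ssh/config
Host efgw
    HostName login.eufus.eu
    User username
    ForwardX11 yes
    # Don't remember the fingerprint, and don't complain when it changes.
    # This trades away the protection against man-in-the-middle attacks!
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
```

Afterwards, `ssh efgw` should connect without the warning.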
NoMachine virtual desktop: Follow CINECA's setup instructions. You may have to upgrade to the latest version of NoMachine. When it works, accessing EFGW through virtual desktops is super fast and convenient.
Connection to IPP Garching (ASDEX-Upgrade)
Why would you want to connect to the IPP Garching computing clusters? Before summer 2020, the answer was simple: the SOLPSpy package, which to this day remains the best option for importing SOLPS-ITER results to Python. At that time I believed that SOLPSpy couldn't be run outside IPP Garching, so I ran SOLPS-ITER on Marconi Gateway, copied the files to IPP Garching, exported them to Python-readable form using SOLPSpy, copied that to IPP Prague and finally checked if the code results agree with the COMPASS experimental data. If it sounds complicated, it was. Thank God I have since figured out that SOLPSpy could be run at IPP Prague as well. (The tutorial on SOLPSpy installation and usage is here.) Consequently, this section remains as dead wood among the tutorials. Hopefully it will be useful to someone one day.
IPP Garching has a number of clusters, and not all were born equal. According to their cluster guidelines, the TOK-I cluster is the go-to option for processing SOLPS-ITER output.
Note that your home folder has limited disk space (50 GB), which means you can't copy SOLPS-ITER runs there. As described here, an ideal place for SOLPS-ITER runs is /toks/scratch/username.
There are basically two ways to access the IPP Garching computing infrastructure:
- SSH. This is the efficient and easy option, but it doesn't fare so well when you want visuals: pictures, windows etc.
- VPN + remote desktop. This takes longer to set up, but if you can't get by with a bare command line (like me), you'll appreciate this option more.
I should also point out that an active VPN connection is necessary for a number of tasks within the IPP network, such as opening its github pages or connecting to the remote desktop.
Access to the AUG intranet
In all of the above cases, you need access to the AUG intranet first. If you have visited AUG, you may already have an account there. If not, write to Marion Berger (sekmst@ipp.mpg.de), the MST secretary at AUG, and ask her to confirm your request for a "real" user ID in the AUG computer system. You can submit the request here. Fill in "Marion Berger" as your supervisor/project manager. (I'm assuming that if you're working with SOLPS, you are part of MST1.)
Access to the TOK clusters
Login rights to the TOK clusters (TOK-I, TOK-S, TOK-P... ) usually don't come automatically with intranet access. To get access, write to David Coster (David.Coster@ipp.mpg.de).
SSH connection
All you need in this case is a command line. To log into the Solaris environment:
ssh -X username@sxbl16.aug.ipp.mpg.de
Here you can run `cview`, a program for viewing experimental data. However, to run Python, it's much better to switch into the Linux shell on one of the TOK clusters:
rlogin tok01
Alternatively, SSH into the cluster directly:

ssh -X your_username@toki01.bc.rzg.mpg.de

To browse the cluster filesystem, mount it with SSHFS (the transform_symlinks option turns absolute symlinks into relative ones so that they work through the mount):

sshfs -o transform_symlinks your_username@toki01.bc.rzg.mpg.de:/ '/path/to/mount/point'
VPN connection
1. Go to the AUG computer network webpage and follow the instructions. To install the downloaded `.sh` file on Linux:

   cd /path/to/the/sh/file
   chmod +x anyconnect-linux64-4.7.03052-core-vpn-webdeploy-k9.sh
   sudo ./anyconnect-linux64-4.7.03052-core-vpn-webdeploy-k9.sh

2. Run Cisco AnyConnect on your machine. Enter `vpn.mpcdf.mpg.de` as the server and your AUG intranet login. Click Connect. (It will ask for your password on every startup. I haven't yet found a way to make it remember me.)
Remote desktop
You have two options: one that is great (or "adequate", depending on your standards) and one that sucks (by all standards). The first is the ThinLinc Client; the other is the Oracle Virtual Desktop. Both of these links are exhaustive installation tutorials. In the case of the ThinLinc Client though, just be careful to download the client and not the server. In my case the client didn't show up among installed applications right away, so I ran it from the command line:
tlclient
Note, again, that to connect to the remote desktop, you need to have an active VPN connection first.
Transferring SOLPS-ITER runs
It is frequently the case that you run or post-process SOLPS on different servers, for instance on the Soroban cluster (located at IPP Prague), at IT4I and on the Marconi Gateway. This section details how to transfer SOLPS data on several levels: inside a case, inside a SOLPS installation, between SOLPS installations, and between servers.
Note: The SOLPS command transfer_solps-iter_runs is ignored here, as it requires the sshpass command. sshpass isn't pre-installed on either Gateway or Soroban, and installing it requires either superuser rights (which you might not have) or bugging the IT department (which you might not want).
To clear up the terminology:
- A server refers to a computer cluster with a common file system. Examples of servers are Soroban at IPP Prague, Karolína at IT4I, or Gateway at Marconi Fusion.
- A SOLPS installation is a directory where the SOLPS-ITER source code has been cloned and compiled. Several independent SOLPS installations may coexist on a single server (e.g. different versions of SOLPS), each with a separate work environment. Within this work environment, the top directory is called `$SOLPSTOP`.
- A case is a directory located in `$SOLPSTOP/runs/`, containing a `baserun` directory and probably a few directories called `run`, `run2`, `diverged_run_wtf` or `cflme_0.3_cflmi_0.2`. The individual runs share the computational grid specified in the baserun.
- A run is a specific SOLPS-ITER simulation, with its own boundary conditions, plasma solution, `run.log` etc.
Copy a run inside the same case
Make a copy with the cp command.
cp -r run new_run
Transfer the plasma solution from one case to another
If both the cases have the same B2.5 cell count, copy the converged b2fstate as the new b2fstati.
cp case1/run/b2fstate case2/run/b2fstati
This does not transfer the entire solution, but it does capture its essential part. Use this, for instance, to supply a new case with realistic starting plasma profiles instead of the default flat profiles solution. Even if the simulation is completely different, this will reduce the time to convergence.
This can also be done when case2 contains additional ion species, such as sputtered carbon. SOLPS will take the existing solution (background deuterium plasma) and start up the remaining species from the flat profiles solution.
The plasma solution can also be transferred between different SOLPS-ITER installations, provided their versions are not too dissimilar. Use the scp command (see below) to transfer the b2fstate file between different servers. In both cases, it is required that the target case is already built up (e.g. ready for running save for an initial plasma state) and has the same B2.5 cell count. (This is a compelling reason why, in the absence of grid issues, you should build all your simulations with the same number of cells.)
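For instance, to move the solution between two servers through your local machine (the paths, addresses and case names are placeholders):

```shell
# Pull the converged state from the origin server...
scp user@origin.server.eu:/solps/path/runs/case1/run/b2fstate ./b2fstate

# ...and push it to the target case as the initial state.
scp ./b2fstate user@target.server.eu:/solps/path/runs/case2/run/b2fstati
```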
If the two cases have different B2.5 cell count (for instance when running the same simulation on a finer grid), use the b2yt command. Refer to section 3.14 of the manual for instructions; I've never used it.
Transfer an entire case from one SOLPS installation to another
This is more complicated, as SOLPS-ITER doesn't store all the case-relevant files within the case directory. Files needed by B2.5 may be located in $SOLPSTOP/modules/Carre/, files needed by EIRENE may be located in $SOLPSTOP/modules/Triang/ and so on. Jan Hečko has developed the following tutorial:
1. Download the script baserun2archive.py to the server where your original simulation was performed.

2. Open a terminal on the origin server and run the script:

   cd /download/path
   python3 ./baserun2archive.py /solps/installation/path/runs/case/baserun baserun.tar.gz

3. Copy the archived `baserun` to the target server.

   scp baserun.tar.gz user@target-server.com:/solps/installation/path/runs/

4. At the origin server, go to the case directory and archive the `run` directory of your choice.

   cd /solps/installation/path/runs/case/
   tar -czvf run.tar.gz run

5. Copy the archived `run` to the target server.

   scp run.tar.gz user@target-server.com:/solps/installation/path/runs/

6. Switch to the target server command line.

   ssh -X user@target-server.com

7. Initiate the SOLPS work environment.

   cd /solps/installation/path
   tcsh
   source setup.csh
   setenv DEVICE compass  # or your machine of choice

8. Create a new case directory and move both the archives there.

   cd runs
   mkdir transferred_case
   mv baserun.tar.gz transferred_case/baserun.tar.gz
   mv run.tar.gz transferred_case/run.tar.gz

9. Extract both the archives.

   cd transferred_case
   tar -xvf baserun.tar.gz
   tar -xvf run.tar.gz

10. Link up the DivGeo file (the file in baserun ending with `.dg`) and the EIRENE links in `baserun`.

    cd baserun
    lns <DivGeo_file_name_without_the_.dg_extension>
    setup_baserun_eirene_links

11. Correct the `baserun` time stamps.

    cd ..
    correct_baserun_timestamps

12. Set up EIRENE links to `baserun` from `run`.

    cd run
    setup_baserun_eirene_links

13. Correct the `run` time stamps.

    correct_b2yt_timestamps

14. Perform a dry run and check which routines would be called.

    b2run -n b2mn | grep -oE 'b2[a-z]+.exe' | uniq

    If `b2ai` (initial plasma solution) is among the lines, watch out: the simulation is about to ignore your `b2fstati` and start from the flat profiles solution.

15. Set the simulation time (`'b2mndr_elapsed'`) in `b2mn.dat` to 60 seconds and restart the simulation.

    # Restart the simulation
    rm b2mn.prt
    cp b2fstate b2fstati
    # Run the simulation for 60 s
    b2run b2mn >& run.log &

16. Wait and check the results.

    # Plot residuals of the continuity equation for all ion species
    resco
    # Plot time evolution of the outer midplane electron density
    2dt nesepm

    If everything works, you have successfully transferred your simulation.
Using rsync to copy run data
If you don't intend to run the simulation on the target server, you can use the rsync command to simply transfer the files. Note that rsync cannot copy directly between two remote hosts, so log into one of the servers first; for instance, from the target server:

rsync -hivaz username1@source.com:/path/to/case /path/to/case