Remote access

This tutorial covers connecting to remote servers (IPP Prague, Marconi Gateway and IPP Garching), running SOLPS-ITER on them, and transferring SOLPS runs between cases and servers.

Connection to IPP Prague (COMPASS)

First, a word about COMPASS servers. There are three servers available for remote connection: ltserv (in versions ltserv3 and ltserv4 as of November 2020), Abacus and Soroban. Ltserv is suited for everyday use, such as viewing files, writing theses and using PowerKey (a tool which lets you check your daily attendance at the institute). Abacus and Soroban are powerful clusters (Abacus being the older one) suited for demanding scientific calculations, such as SOLPS-ITER simulations.

The central SOLPS-ITER installation is located on one of the Soroban nodes. Users are encouraged to make their installations there as well. It is also good practice to create symbolic links to your installation from all the other nodes (and from soroban-front-01) so that it can be accessed from them. This is because the Soroban computational power should be tapped from the Soroban front, which decides which node to use for your job. The central installation can, for instance, be found in /net/soroban-front-01/scratch/solps/solps-iter. This path will also work from Abacus, but not from ltserv.

There are several ways to connect to our institute remotely: SSH, SSHFS, VPN and X2Go.

SSH

SSH is the most basic, fastest and often the most useful way to access COMPASS servers. It is also the default way to run SOLPS-ITER from anywhere - home, IPP, workstation or laptop. SSH allows you to open a command line on the remote computer via the following command:

ssh -X username@soroban.tok.ipp.cas.cz
This will take you to the Soroban front, the "front door". soroban may be replaced by another server name: ltserv3, ltserv4, abacus, soroban-node-01 etc. If you are in the internal IPP network (working directly at IPP or using VPN or X2Go), you may leave out the tok.ipp.cas.cz part.

To run SOLPS-ITER at IPP Prague via SSH:

  1. Log into the Soroban front and ask for some computational power from the resource management system.

    ssh -X username@soroban
    qsub -IX
    

    The -I means you are starting an interactive session. The -X means, as usual, that Matplotlib plots and similar graphics are going to be tunneled to your screen. In an interactive session, it is assumed you are mostly going to mess around, make cheap simulations with b2run and look at the results. If you want to submit longer or more numerous simulations, use sorobansubmit instead and don't bother initiating qsub -I.

  2. Go to the SOLPS-ITER installation directory you wish to use (also called $SOLPSTOP in the SOLPS-ITER jargon):

    cd /net/soroban-node-06/scratch/jirakova/solps/solps-iter
    

    This, in particular, is the directory of my personal SOLPS-ITER installation at the Soroban node 6.

  3. Activate the SOLPS-ITER work environment. Switch to the tcsh shell, load the environment variables and commands, and tell SOLPS (in particular DivGeo, Carre and other mesh-building software) which machine you're going to simulate.

    tcsh
    source setup.csh
    setenv DEVICE compass
    

Now you're good to go.
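
As a quick sanity check, you can enter one of your existing cases and launch a short B2.5 run (a sketch; my_case stands for one of your own case directories):

    cd runs/my_case/run        # my_case is a placeholder
    b2run b2mn >& run.log &    # launch B2.5 in the background
    tail -f run.log            # watch the log; Ctrl+C stops the watching, not the run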

SSHFS

Sometimes you'll only need to access the files of SOLPS-ITER (viewing the manual, copying the equilibrium file etc.) in a file browser. To this end, the SSHFS command is ideal.

sshfs username@soroban.tok.ipp.cas.cz:/ /path/to/mount/point
It mounts the Soroban cluster to your local directory structure, enabling you to access the remote files just like any file on your computer. You'll find the central installation $SOLPSTOP directory in /path/to/mount/point/scratch/solps/solps-iter/. After your work is done, don't forget to dismount:
fusermount -u /path/to/mount/point
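
A typical session might look like this (the mount point and file names are illustrative):

    mkdir -p ~/soroban                                    # create a local mount point
    sshfs username@soroban.tok.ipp.cas.cz:/ ~/soroban     # mount the Soroban file system
    cp ~/soroban/scratch/solps/solps-iter/runs/my_case/baserun/my_equilibrium.equ .   # copy an equilibrium file, for example
    fusermount -u ~/soroban                               # unmount when done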

VPN

VPN (Virtual Private Network) is the highly recommended way to connect to IPP network from the outside. Combined with SSH and SSHFS, it allows for a stable and secure connection.

To set up a VPN, connect to ltserv3 (for instance using SSHFS) and go to the folder /compass/home/username/Public. There should be a file called username.tar.gz. Copy it to your local device and extract the files to some location where you will find them easily (and will not delete them by accident). Then either follow Stanislav Tokoš's VPN setup tutorial or, on Linux, do the following:

  1. Check your version of OpenVPN.

       openvpn --version
    

    If the OpenVPN version is above 2.4.4 and the OpenSSL version is above 1.0.2m, you're good to go. If they aren't, you'll have trouble. In that case:

    1. Add the current OpenVPN repository among your repositories.

      sudo -s
      wget -O - https://swupdate.openvpn.net/repos/repo-public.gpg | apt-key add -
      echo "deb http://build.openvpn.net/debian/openvpn/stable xenial main" > /etc/apt/sources.list.d/openvpn-aptrepo.list
      
    2. Download the newest package version.

      apt-get update
      apt-get upgrade
      
    3. Restart your computer.

    4. Check the version of OpenVPN again; now it's hopefully better.

  2. Create a text file and copy-paste the preferred VPN configuration from the tutorial into it. Replace the {{your_name}} in the file with your username.

  3. Open the Network Manager and add a VPN connection, choosing to import the settings from a file. Choose the file you've just created. (A command-line alternative is sketched after this list.)
  4. Call the VPN whatever you like.
  5. As the username and password, write your LDAP username and password.
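
If you prefer the command line to the Network Manager GUI, the import and activation can also be done with nmcli (a sketch, assuming your configuration file is called ipp.ovpn):

    nmcli connection import type openvpn file ipp.ovpn   # import the VPN profile
    nmcli connection up ipp                              # activate it (the connection name defaults to the file name)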

To connect to the VPN, right-click the Connections icon on the right of your control panel and find VPN connections. You will know that your VPN is working when:

  1. You can access WebCDB without typing your username and password.
  2. You can log into Abacus via ssh username@abacus, leaving out the tok.ipp.cas.cz part.
  3. Your internet works like usual.

Unfortunately, this type of VPN configuration won't allow you to access subscription-based journal articles, such as those on IoP Science, but you can still log in via Shibboleth with your VERSO credentials. If that doesn't work, there's always SciHub.

X2Go

The X2Go software allows you to open a remote desktop of your IPP Prague workspace, emulating work on a workstation in slightly worse graphics quality. This is useful, for instance, when you need to quickly access internal webpages (such as, until March 2020, the COMPASS wiki). To set up X2Go, follow the COMPASS wiki tutorial.

Resource management system

Abacus and Soroban use the resource management system qsub, which tries to allocate computational power to individual users and processes as efficiently as possible. A tutorial can be found on the COMPASS wiki. To put it in layman's terms, if you know how long your programme will run and how much computational power it will require, you can let the resource management system know what to expect and it will split the work efficiently between the processors. To submit SOLPS-ITER simulations, use the sorobansubmit command.
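
For illustration, a generic PBS-style batch script might look like the sketch below. The exact queue and resource names on Soroban may differ, and for SOLPS-ITER jobs you should normally just use sorobansubmit; this only shows what "letting the scheduler know what to expect" means.

    #!/bin/bash
    #PBS -N my_job               # job name (illustrative)
    #PBS -l nodes=1:ppn=8        # one node, 8 cores
    #PBS -l walltime=24:00:00    # expected run time
    cd $PBS_O_WORKDIR            # start where the job was submitted from
    ./my_programme > my_programme.log 2>&1

Submit it with qsub my_job.sh and the scheduler will queue it according to the requested resources.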

Another use of qsub for SOLPS-ITER modellers is, say, when one wishes to compare SOLPS-ITER and experimental results using an interactive Python shell. In this case, use qsub in the interactive mode:

qsub -IX
Then, run Python or whichever interactive program you like. End the job with the exit command. To check the current usage of Soroban, open an internet browser (on your workstation, through X2Go or with the VPN turned on) and write
soroban:44444
into the address bar. Refer to Running SOLPS-ITER: sorobansubmit for other ways to check on the resource management system.
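
Assuming a PBS/Torque-style scheduler (it provides qsub), you can also check on your jobs directly from the command line; the exact flags may vary:

    qstat -u username    # list your queued and running jobs
    qdel 12345           # cancel a job by its ID (illustrative ID)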

Connection to Marconi Gateway

The Marconi Gateway (also referred to as ITM or Gateway or Marconi) is a computing cluster connected to but somewhat independent of the Marconi Fusion cluster in Italy. (Independent in the sense that access to one doesn't grant you access to the other.) During the October 2018 SOLPS-ITER introduction course, we were encouraged to create an account there to run and install SOLPS-ITER. Now that SOLPS-ITER can be run on COMPASS (and perhaps IT4I), this is somewhat out-of-date. It can still be used as back-up, though.

To access Marconi Gateway, first of all you need login credentials. Follow these instructions from the EUROfusion website and expect a few days' delay.

Now that you can log in, you can explore all the possible connection types. Here I list SSH, because it's quick and painless, and NX via NoMachine, because it's what I use.

Connection via SSH

SSH access is better if all you need is the command line:

ssh -X username@login.eufus.eu
This can be supplemented by mounting the remote drive to your file system:
sshfs username@login.eufus.eu:/ '/mount/point/'

NX client

An NX client is the more advanced, harder-to-set-up but convenient-to-use option, the recommended one being NoMachine. Sometimes it gives me trouble, but when it works, it's super fast and convenient. Its setup, described here, basically mirrors the official setup tutorial.

To use NoMachine, first download and install it however you must on your operating system. Then start the programme, click through whichever messages you need to click through, and click on "New" on the top right bar.

Protocol: SSH
Ideally, you'd want to use the NX protocol because it's faster, but lately (winter 2019) NX has been kind of broken for Gateway and only SSH works for me.
Host: s53.eufus.eu
Port: 22
Choosing any of the s51-s53 login nodes will allow your sessions to persist, meaning when you shut NoMachine down, your working environment will continue to exist and you'll be able to connect to it again. Which is extremely neat. To a naive beginner user, the port doesn't particularly matter. Leave whatever's suggested.
How do you want to authenticate on the host? Use the system login.
We'll be signing in with username and password.
Choose which authentication method you want to use. Password.
Since NoMachine will remember your password, this is sufficient. Note though, you won't be able to extract it back from NoMachine, so always keep a copy of your password.
Use a HTTP proxy for the network connection. Don't use a proxy.
Name: Gateway
(Or whatever you like.)

This concludes the connection set-up. Double-click the connection to open it. When you do this for the first time, you will be asked for username and password. Write them and choose "Save this password in the connection file". Congratulations, you're all set up! To start using SOLPS-ITER, open the command line on the remote desktop, go to your SOLPS-ITER installation directory (or install SOLPS-ITER as described in the installation tutorials) and initialise the SOLPS-ITER work environment (described in the same document).

Connection to IPP Garching (ASDEX-Upgrade)

Why would you want to connect to the IPP Garching computing clusters? Before summer 2020, the answer was simple: the SOLPSpy package, which to this day remains the best option for importing SOLPS-ITER results to Python. At that time I believed that SOLPSpy couldn't be run outside IPP Garching, so I ran SOLPS-ITER on Marconi Gateway, copied the files to IPP Garching, exported them to Python-readable form using SOLPSpy, copied that to IPP Prague and finally checked if the code results agree with the COMPASS experimental data. If it sounds complicated, it was. Thank God I have since figured out that SOLPSpy could be run at IPP Prague as well. (The tutorial on SOLPSpy installation and usage is here.) Consequently, this section remains as dead wood among the tutorials. Hopefully it will be useful to someone one day.

IPP Garching has a number of clusters, and not all were born equal. According to their cluster guidelines, to process the SOLPS-ITER output the TOK-I cluster is the go-to option.

Note that your home folder has limited disk space (50 GB), which means you can't copy SOLPS-ITER runs there. As described here, an ideal place for SOLPS-ITER runs is /toks/scratch/username.
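
For instance, you might check your home directory usage and prepare your own scratch directory like this (a sketch; the scratch path follows the guideline above):

    du -sh ~                        # how much of the 50 GB home quota is used
    mkdir -p /toks/scratch/$USER    # create your personal scratch directory for SOLPS-ITER runs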

There are basically two ways to access the IPP Garching computing infrastructure:

  1. SSH. This is the efficient and easy option, but it doesn't fare so well when you want visuals; pictures, windows etc.
  2. VPN + remote desktop. This takes longer to set up, but if you can't get by with a bare command line (like me), you'll appreciate this option more.

I should also point out that an active VPN connection is necessary for a number of tasks within the IPP network, such as opening its github pages or connecting to the remote desktop.

Access to the AUG intranet

In all of the above cases, you need access to the AUG intranet first. If you have visited AUG, you may already have an account there. If not, write to Marion Berger (sekmst@ipp.mpg.de), the MST secretary at AUG and ask her to confirm your request for a "real" user ID at the AUG computer system. You can submit the request here. Fill in "Marion Berger" as your supervisor/project manager. (I'm assuming that if you're working with SOLPS, you are part of MST1.)

Access to the TOK clusters

Login rights to the TOK clusters (TOK-I, TOK-S, TOK-P... ) usually don't come automatically with intranet access. To get access, write to David Coster (David.Coster@ipp.mpg.de).

SSH connection

All you need in this case is a command line. To log into the Solaris environment:

ssh -X username@sxbl16.aug.ipp.mpg.de
This is the environment you access when you log into a workstation at AUG. According to Lisa Sytova, the only thing the Solaris environment is good for is cview, a program for viewing experimental data. However, to run Python, it's much better to switch into the Linux shell on one of the TOK clusters:
rlogin tok01
Alternatively, if you don't need the Solaris roundabout, you can log in directly to the TOK-I cluster:
ssh -X your_username@toki01.bc.rzg.mpg.de
If you want to access the files directly from your machine, you can also mount the file system:
sshfs -o transform_symlinks your_username@toki01.bc.rzg.mpg.de:/ '/path/to/mount/point'

VPN connection

  1. Go to the AUG computer network webpage and follow the instructions. To install the downloaded .sh file on Linux:

    cd /path/to/the/sh/file
    chmod +x anyconnect-linux64-4.7.03052-core-vpn-webdeploy-k9.sh
    sudo ./anyconnect-linux64-4.7.03052-core-vpn-webdeploy-k9.sh
    
  2. Run Cisco AnyConnect on your machine. Enter vpn.mpcdf.mpg.de as the server and your AUG intranet login. Click Connect. (It will ask for your password on every startup. I haven't yet found a way to make it remember me.)
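
On Linux, AnyConnect also ships a command-line client; on my installation it lives under /opt/cisco/anyconnect/bin/, though your path may differ:

    /opt/cisco/anyconnect/bin/vpn connect vpn.mpcdf.mpg.de    # connect (asks for your username and password)
    /opt/cisco/anyconnect/bin/vpn disconnect                  # disconnect when done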

Remote desktop

You have two options: one that is great (or "adequate", depending on your standards) and one that sucks (by all standards). The first one is the ThinLinc Client, the other is the Oracle Virtual Desktop. Both of these links are exhaustive installation tutorials. In the case of the ThinLinc Client, just be careful to download the client and not the server. In my case the client didn't show up among installed applications right away, so I ran it from the command line:

tlclient
There are many reasons why the Oracle Virtual Desktop sucks. It doggedly thinks that your keyboard has the weird layout used in AUG work stations. It doesn't support copy-paste between the remote desktop and your own desktop. It has Solaris, which is ugly. All in all, the only reason I write about it here is that it is an option.

Note, again, that to connect to the remote desktop, you need to have an active VPN connection first.

Transferring SOLPS-ITER runs

It is frequently the case that you run or post-process SOLPS on different servers, for instance on the Soroban cluster (located at IPP Prague), on IT4I and on the Marconi Gateway. This section details how to transfer SOLPS data on several levels: inside a case, inside a SOLPS installation, between SOLPS installations, and between servers.

Note: The SOLPS command transfer_solps-iter_runs is ignored here, as it requires the sshpass command. sshpass isn't pre-installed on either Gateway or Soroban, and installing it requires either superuser rights (which you might not have) or bugging the IT department (which you might not want).

To clear up the terminology:

  • A server refers to a computer cluster with a common file system. Examples of servers are Soroban at IPP Prague, Karolína at IT4I, or Gateway at Marconi Fusion.

  • A SOLPS installation is a directory where the SOLPS-ITER source code has been cloned and compiled. Several independent SOLPS installations may coexist on a single server (e.g. different versions of SOLPS), each with a separate work environment. Within this work environment, the top directory is called $SOLPSTOP.

  • A case is a directory located in $SOLPSTOP/runs/, containing a baserun directory and probably a few directories called run, run2, diverged_run_wtf or cflme_0.3_cflmi_0.2. The individual runs share the computational grid specified in the baserun.

  • A run is a specific SOLPS-ITER simulation, with its own boundary conditions, plasma solution, run.log etc.

Copy a run inside the same case

Make a copy with the cp command.

    cp -r run new_run
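
If you want the copy to continue from its last plasma state instead of repeating the old iterations, it is usual to refresh its initial state first (this mirrors the restart recipe used later in this tutorial):

    cd new_run
    rm b2mn.prt            # forget the previous run's protocol
    cp b2fstate b2fstati   # continue from the last saved plasma state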

Transfer the plasma solution from one case to another

If both the cases have the same B2.5 cell count, copy the converged b2fstate as the new b2fstati.

    cp case1/run/b2fstate case2/run/b2fstati

This does not transfer the entire solution, but it does capture its essential part. Use this, for instance, to supply a new case with some starting realistic plasma profiles instead of the default flat profiles solution. Even if the simulation is completely different, this will reduce the time to convergence.

This can also be done when case2 contains additional ion species, such as sputtered carbon. SOLPS will take the existing solution (background deuterium plasma) and start up the remaining species from the flat profiles solution.

The plasma solution can also be transferred between different SOLPS-ITER installations, provided the versions are not too dissimilar. Use the scp command (see below) to transfer the b2fstate file between different servers. In both cases, the target case must already be built (i.e. ready for running except for an initial plasma state) and have the same B2.5 cell count. (This is a compelling reason why, in the absence of grid issues, you should build all your simulations with the same number of cells.)
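
For example, to seed a prepared case on another server with the converged state of a local run (the host name and paths are illustrative):

    scp case1/run/b2fstate username@target-server.com:/solps/installation/path/runs/case2/run/b2fstati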

If the two cases have different B2.5 cell count (for instance when running the same simulation on a finer grid), use the b2yt command. Refer to section 3.14 of the manual for instructions; I've never used it.

Transfer an entire case from one SOLPS installation to another

This is more complicated, as SOLPS-ITER doesn't store all the case-relevant files within the case directory. Files needed by B2.5 may be located in $SOLPSTOP/modules/Carre/, files needed by EIRENE may be located in $SOLPSTOP/modules/Triang/ and so on. Jan Hečko has developed the following tutorial:

  1. Download the script baserun2archive.py to the server where your original simulation was performed.

  2. Open a terminal on the origin server and run the script:

    cd /download/path
    python3 ./baserun2archive.py /solps/installation/path/runs/case/baserun baserun.tar.gz
    
  3. Copy the archived baserun to the target server.

    scp baserun.tar.gz user@target-server.com:/solps/installation/path/runs/
    
  4. At the origin server, go to the case directory and archive the run directory of your choice.

    cd  /solps/installation/path/runs/case/
    tar -czvf run.tar.gz run
    
  5. Copy the archived run to the target server.

    scp run.tar.gz user@target-server.com:/solps/installation/path/runs/
    
  6. Switch to the target server command line.

    ssh -X user@target-server.com
    
  7. Initiate the SOLPS work environment.

    cd /solps/installation/path
    tcsh
    source setup.csh
    setenv DEVICE compass #or your machine of choice
    
  8. Create a new case directory and move both the archives there.

    cd runs
    mkdir transferred_case
    mv baserun.tar.gz transferred_case/baserun.tar.gz
    mv run.tar.gz transferred_case/run.tar.gz
    
  9. Extract both the archives.

    cd transferred_case
    tar -xvf baserun.tar.gz
    tar -xvf run.tar.gz
    
  10. Link up the DivGeo file (the file in baserun ending with .dg) and EIRENE links in baserun.

    cd baserun
    lns <DivGeo_file_name_without_the_.dg_extension>
    setup_baserun_eirene_links
    
  11. Correct the baserun time stamps.

    cd ..
    correct_baserun_timestamps
    
  12. Set up EIRENE links to baserun from run.

    cd run
    setup_baserun_eirene_links
    
  13. Correct the run time stamps.

    correct_b2yt_timestamps
    
  14. Perform a dry run and check which routines would be called.

    b2run -n b2mn | grep -oE 'b2[a-z]+.exe' | uniq
    

    If b2ai (initial plasma solution) is among the lines, watch out, the simulation is about to ignore your b2fstati and start from the flat profiles solution.

  15. Set the simulation time ('b2mndr_elapsed') in b2mn.dat to 60 seconds (a sketch of the relevant line is shown after this list) and restart the simulation.

    #Restart the simulation
    rm b2mn.prt
    cp b2fstate b2fstati
    
    #Run the simulation for 60 s
    b2run b2mn >& run.log &
    
  16. Wait and check the results.

    #Plot residuals of the continuity equation for all ion species
    resco
    
    #Plot time evolution of the outer midplane electron density
    2dt nesepm
    

    If everything works, you have successfully transferred your simulation.
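
For reference, the simulation time switch changed in step 15 is a single line in b2mn.dat; it typically looks like the sketch below (check your own b2mn.dat for the exact formatting):

    'b2mndr_elapsed'  '60'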

Using rsync to copy run data

If you don't intend to run the simulation on the target server, you can use the rsync command to simply transfer the files:

rsync -hivaz /path/to/case username@destination.com:/path/to/case
Run this from the source server (rsync cannot copy directly between two remote hosts, so one side of the transfer has to be local). Because rsync only transfers files whose size or modification time has changed, it won't copy old untouched runs over and over again.