Installing SOLPS-ITER in a container
In essence, a container is a tiny virtual computer which contains nothing but the software you wish to run (here SOLPS-ITER) and the libraries needed to run it, exactly in the form and version you need. Like a parasite, it uses another machine's computational power to run its software, while being completely isolated from the mother organism save for CPU power. It is the perfect microcosm to nourish the beast which is SOLPS-ITER.
Proper container definition
A container is a virtual Operating System (OS). It is isolated from the host OS, but it uses the host OS's kernel, i.e. the guardian of the hardware components. It is a form of virtualization, but much leaner than an actual virtual machine. In the case of Linux, the OS part (= the specific Linux distribution) is nothing more than a bunch of system executables, libraries, and configurations in /bin, /lib, /etc and so on. A container is a process like any other, but it is started with a "different view" of the filesystem, where the root / directory points to the prepared container image, which can be based on an entirely different Linux distribution.
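To make the "bunch of files with a different filesystem view" idea concrete, here is a toy sketch. The directory names and contents are purely illustrative, not a real container image:

```shell
# A container image is essentially a directory tree laid out like a Linux
# root filesystem. Build a toy "image" to show the layout:
IMAGE=$(mktemp -d)
mkdir -p "$IMAGE/bin" "$IMAGE/lib" "$IMAGE/etc"
echo 'NAME="Toy Linux"' > "$IMAGE/etc/os-release"
ls "$IMAGE"                  # shows bin, etc, lib
cat "$IMAGE/etc/os-release"
# A container runtime then starts an ordinary process whose root / is
# remapped to this tree, e.g. (requires privileges, illustration only):
#   chroot "$IMAGE" /bin/sh
```

A real image of course carries a full distribution's worth of files, but the principle is the same: no hardware emulation, just a process with a different root directory.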
Our in-house expert on SOLPS containers is Jan Hečko (mail hecko@ipp.cas.cz), author of the package SOLPS-container.
Alternative sources
Our tutorials are just dumbed-down versions of the official SOLPS-container READMEs. They are the primary, up-to-date reference material.
There are two ways of using Honza's SOLPS container:
- Use a pre-compiled, read-only container. This will free you of installing SOLPS-ITER, but it will also confine you to a list of specific versions. As of January 2026, this list includes 3.0.9 ("narrow grids" master), 3.1.1 (9-point-stencil) and 3.2.0-alpha (wide grids).
- Compile your own container. This will give you more freedom of which SOLPS version to use and allow you to adjust your code, but it will open you up to possible compilation errors.
Use a pre-installed, read-only container
The following instructions are specific to IPP Prague (Soroban cluster).
For generic instructions, refer to the SOLPS-container READMEs. Katka keeps these tutorials here because she isn't sure how to disentangle the generic and Soroban-specific details, and she has a hard time following the official documentation.
A read-only SOLPS installation is perfect if you want to get a quick start in using SOLPS and don't plan to tweak the source code. To check which SOLPS versions are available for usage on Soroban:
module load solps-container/ #now hit Tab!
solps-container/3.0.8 solps-container/userdefined
solps-container/3.0.9 solps-container/wg-3.2.0-alpha
solps-container/3.1.1
As the read-only designation suggests, the entire $SOLPSTOP directory is immutable. This means your run data (mesh files and the $SOLPSTOP/runs folder) has to be stored elsewhere. By default, this is in your COMPASS home directory. More info below.
Get started with a read-only SOLPS container
- Log into the Soroban front.

    ssh -X username@soroban.tok.ipp.cas.cz

- In a Soroban node with available computational power (see soroban:44444), create an interactive QSUB job, so that you have your own computational space.

    qsub -IX -l select=1:host=soroban-node-03

  Launching the container from soroban-front-01 will result in the "Illegal instruction" error, as detailed in the SOLPS-container README.

- Choose a version of the read-only container.

    module load solps-container/3.0.9

- Change the working directory to /net.

    cd /net

  This is so that this directory is mounted by the SOLPS container, giving you access to everything inside. For each node, the directory /net/soroban-node-XX/scratch/$USER/live is where your simulations, submitted with the QSUB script, will actually be run. Access to them gives you the option to run mid-simulation diagnostics (2dt nesepm) or to interrupt the simulation prematurely (touch b2mn.exe.dir/.quit).

- Open a command line inside the container.

    solps_tcsh

  You are now inside a tiny Linux computer with SOLPS (and not much more) installed on it. You are in the /opt/solps-iter directory, where SOLPS-ITER is installed. The SOLPS-ITER work environment is activated (tcsh and source setup.csh have been run). You can use SOLPS commands and launch short simulations, but to launch longer simulations, switch to the Soroban front and use the job submission system.
Features of a SOLPS container
- Inside the container, your COMPASS home directory is mounted and ready to contain your runs. Go there and create a place for your SOLPS runs.

    cd /compass/home/$USER/
    mkdir solps_runs

  When you launch runs in the future, it will be from this directory. No worries, the container knows where its SOLPS is located, even if you aren't launching b2run from inside $SOLPSTOP. Storing your run data inside your home directory, as compared to the /scratch directory, has one advantage and one disadvantage. On the plus side, the run data is backed up regularly, so if you delete something by accident, it can be retrieved. On the minus side, access to the home directory while running SOLPS from the container is slow, and it will slow your runs down. The /scratch directory is not backed up, but it is designed for fast access. A middle ground is trodden in Honza's example QSUB script for submitting simulations from the read-only SOLPS container: at the start of the simulation, the files are copied to /scratch, and at the end, they are copied back to your home directory.

- Inside the container, the current working directory, from which you launched the container, is mounted as well. You will find it under the same absolute path as on the Soroban node you launched the container from. This is handy when submitting simulations using Honza's example QSUB script, which copies the simulation into the Soroban node's /scratch/$USER/live directory. If you change the working directory prior to launching the container...

    qsub -IX -l select=1:host=a-random-free-soroban-node
    module load solps-container/3.0.9
    cd /net
    solps_tcsh

  ...then the directories /net/soroban-node-XX/scratch/$USER/live, where XX runs from 01 to 06, will also be accessible from inside the SOLPS work environment. (If you don't see them using the ls command, don't worry. Just cd into the node you want, and the symbolic link will be created on the fly.) This will enable you to use mid-simulation commands such as 2dt nesepm, no matter which node the simulation is run on.

- When you create grids with DivGeo, Carre and Triang, by default the data is saved not inside the read-only SOLPS installation, but in your home directory. Check them out:

    cd /compass/home/$USER/.solps_container

- LaTeX is not installed in the container, because it would add an extra 5-10 GB. As a result, the manual is not compiled in the $SOLPSTOP/docs/solps/ directory. Access the latest version of the manual at the ITER Sharepoint.

- If you work with a particular container a lot, consider loading the module automatically by placing this line into your .bashrc file:

    module load solps-container/3.0.9 #or any other version you prefer

  If you're using a manually compiled SOLPS container, replace this with:

    module load solps-container/userdefined
    export SOLPS_CONTAINER=/path/to/your/container/
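The home-versus-/scratch trade-off described above is what Honza's QSUB script works around. A minimal sketch of the same copy-run-copy-back pattern, with temporary directories standing in for your home run folder and the node's /scratch (all paths and file contents here are illustrative, not the real script):

```shell
# Stand-ins for /compass/home/$USER/solps_runs/case/run and the node's
# /scratch/$USER/live run directory (illustrative paths only):
HOME_RUN=$(mktemp -d)
SCRATCH_RUN=$(mktemp -d)
echo "input deck" > "$HOME_RUN/b2mn.dat"

# 1. Before the run: copy the case to fast, non-backed-up scratch space.
cp -r "$HOME_RUN/." "$SCRATCH_RUN/"

# 2. The simulation runs against the scratch copy (stand-in command).
echo "converged" > "$SCRATCH_RUN/run.log"

# 3. After the run: copy results back to the slow, backed-up home directory.
cp -r "$SCRATCH_RUN/." "$HOME_RUN/"
ls "$HOME_RUN"   # now contains b2mn.dat and run.log
```

This way the simulation only pays the slow home-directory I/O price twice, at the start and at the end, instead of on every output file it writes.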
Submit simulations from a SOLPS container with QSUB
This tutorial should work for both a read-only and a manually installed SOLPS container, but it has not been tested with the latter.
- Log into the Soroban front.

    ssh -X username@soroban.tok.ipp.cas.cz

  This time, stay at the Soroban front and don't log into a particular node. Also, don't enter an interactive QSUB job, as you can't submit QSUB jobs from inside a QSUB job.

- Load the appropriate container module.

    module load solps-container/3.0.9

- Go to the run directory you would like to run.

    cd /compass/home/$USER/solps_runs/case/run

- Make sure the case is ready to run: choose the appropriate b2fstati and check the desired run length in b2mn.dat (using the elapsed real time b2mndr_elapsed is my personal favourite). The QSUB script will take care of b2mn.prt, setting up baserun EIRENE links and correcting time stamps (to make sure your b2fstati isn't ignored in favour of flat profiles).

- Launch a SOLPS simulation using a QSUB script.

    # Honza's example QSUB script
    qsub -N "short_job_name_without_spaces" -V $(which solps_qsub_soroban)
    # Your copy of this script, see below
    qsub -N "short_job_name_without_spaces" -V /path/to/script

  Honza's example QSUB script will launch the simulation on a free Soroban node, in the long queue, using 1 CPU (without MPI parallelisation). See the options of the QSUB script below. While the example script can be used out of the box, I would still recommend that you make your own copy, see below.

- Check that your job has been submitted with

    qstat -u $USER

  The number at the beginning of the line is the job ID number, used in manipulating the queue (e.g. when deleting a job, see sorobansubmit). In a minute, you will also see the job at soroban:44444.

- At the end of the simulation, the script will automatically perform

    b2run b2uf

  to generate an up-to-date b2fplasmf file.
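The run-length setting mentioned above (b2mndr_elapsed in b2mn.dat) can be scripted rather than edited by hand. The sketch below edits a toy b2mn.dat with sed; the quoted key/value layout mimics the real file, but the file contents and the chosen values are illustrative:

```shell
# Toy b2mn.dat with the quoted key/value layout of the real file:
B2MN=$(mktemp)
printf "'b2mndr_ntim'     '1000'\n'b2mndr_elapsed'  '600'\n" > "$B2MN"

# Allow 20 h of elapsed real time (72000 s), safely under the 24 h
# wall-time limit of the medium queue:
sed -i "s/'b2mndr_elapsed'.*/'b2mndr_elapsed'  '72000'/" "$B2MN"
cat "$B2MN"
```

The same one-liner, pointed at the real b2mn.dat in your run directory, saves you from forgetting a stale run-length limit from a previous submission.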
Checking simulation progress
At the start of the simulation, Honza's script will copy the run folder to the appropriate Soroban node's /scratch directory. Consequently, the run folder in your home is not updated as the simulation progresses. To check on the simulation progress (2dt nesepm etc.), visit the directory:
cd /net/soroban-node-XX/scratch/$USER/live/<jobid>/<scenario_name>/run_01
If such a directory does not exist within your SOLPS work environment, you did not follow the advice to launch solps_tcsh from /net. Exit the SOLPS work environment, change the working directory to /net, and run solps_tcsh again.
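The premature-interrupt mechanism mentioned earlier (touch b2mn.exe.dir/.quit) boils down to the solver polling for a flag file between iterations. A toy stand-in loop (not the real B2.5 code, just an illustration of the idea) looks like this:

```shell
RUN=$(mktemp -d)
mkdir -p "$RUN/b2mn.exe.dir"
cd "$RUN"
# Stand-in "solver loop" that checks for the flag between iterations:
for i in 1 2 3 4 5; do
    if [ -e b2mn.exe.dir/.quit ]; then
        echo "quit flag found, stopping cleanly at iteration $i"
        break
    fi
    echo "iteration $i"
    # In real life you would touch the flag from another terminal;
    # here the loop interrupts itself after iteration 2:
    [ "$i" -eq 2 ] && touch b2mn.exe.dir/.quit
done
```

Because the solver stops between iterations rather than being killed, the output files stay consistent, which is why this is the preferred way to end a run early.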
Options of the QSUB script include:
- Launch the simulation with MPI parallelisation (here 8 cores):

    NPROC=8 qsub -V -l select=1:ncpus=8 -N "short_name" /path/to/script

- Launch the simulation on a particular Soroban node:

    qsub -V -l select=1:host=soroban-node-03 -N "short_name" /path/to/script

- Put the simulation into a different queue than long:

    qsub -V -q medium -N "short_name" /path/to/script

  This is useful if there are a lot of jobs queued and your simulation will be short (less than 24 hours for the medium queue). The available queues are long, medium and batch.

- These options can be combined. For example: -l select=1:ncpus=8:host=soroban-node-03.
Making your own copy of Honza's example QSUB script:
- Find the example script source code.

    which solps_qsub_soroban

- Make a copy and save it somewhere you'll find it later.

- Specify your email in the script header.

    #PBS -M hromasova@ipp.cas.cz

  This will make the PBS queue email you whenever a simulation concludes. When a simulation diverges and ends early, the email will notify you. It will also remind you which simulations you were running yesterday.

- If you typically run simulations "until the next day", change the default queue to medium.

    #PBS -q medium
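The copy-and-personalise steps above can be done in a few lines. In this sketch a fake script stands in for solps_qsub_soroban (so the snippet runs anywhere), and the email address is a placeholder you should replace with your own:

```shell
# Stand-in for `which solps_qsub_soroban` (an illustrative fake script):
SRC=$(mktemp)
printf '#PBS -M hromasova@ipp.cas.cz\n#PBS -q long\n' > "$SRC"

# Your personal copy, saved somewhere you will find it later:
MY_SCRIPT=$(mktemp)
cp "$SRC" "$MY_SCRIPT"

# Put your own email in the header and switch the default queue to medium:
sed -i 's/^#PBS -M .*/#PBS -M your.name@ipp.cas.cz/' "$MY_SCRIPT"
sed -i 's/^#PBS -q long/#PBS -q medium/' "$MY_SCRIPT"
cat "$MY_SCRIPT"
```

On Soroban you would of course copy the real script found by `which solps_qsub_soroban` instead of the fake one, and then submit with `qsub -N "short_name" -V /path/to/your/copy`.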
Build your own SOLPS container
The following instructions are specific to IPP Prague (Soroban cluster).
For generic instructions, refer to the SOLPS-container READMEs. Katka keeps these tutorials here because she isn't sure how to disentangle the generic and Soroban-specific details, and she has a hard time following the official documentation.
This option is for those who need to adjust the SOLPS source code, or who need a particular version of SOLPS-ITER. It is more time-intensive than using a pre-compiled SOLPS container, but it allows the full flexibility of your own SOLPS installation while avoiding the library dependency problem. The only major difference from the read-only container is that you can keep your mesh and run files inside the SOLPS installation.
- Log into the Soroban front, if needed.

    ssh -X username@soroban.tok.ipp.cas.cz

- In a somewhat free Soroban node (check soroban:44444), create an interactive QSUB job, so you have your own computational space. Do not use soroban-node-01.

    qsub -IX -l select=1:host=soroban-node-03

- Load the general SOLPS-container module.

    module load solps-container/userdefined

- Check which containers Honza has prepared.

    find $SOLPS_IMAGES
    # /ansys/fast/solps/solps_images/singularity/solps-iter@3.0.8.sif
    # /ansys/fast/solps/solps_images/singularity/solps-iter@3.1.0.sif
    # ...

- Build a container in your directory of choice, for example /scratch/$USER/solps-3.0.8-container. Do not access another Soroban node through /net.

    singularity build --sandbox /scratch/jirakova/solps-3.0.8-container /ansys/fast/solps/solps_images/singularity/solps-iter@3.0.8-29-gf89a6a23.sif

  The process should take only a few minutes. If it's taking hours, you're probably writing to another Soroban node or the Soroban front.

- Create a mount point for your home directory inside the container.

    mkdir -p /scratch/jirakova/solps-3.0.8-container$HOME

  This will not only allow you to store your runs in your home directory (where they are backed up), but it will also link the SOLPS installation to ~/.ssh, where your SSH keys are very probably located. This will allow you to use SOLPS Git without needing to punch in a username and password every time.

- If you like, you can now move the entire container (here called solps-3.0.8-container) around. You can put it on another Soroban node. You can send it to the EUROfusion Gateway. The process will take a while, though; the entire pre-compiled SOLPS is pretty big. If you move the container outside Soroban, be aware that the module solps-container/userdefined and its handy commands such as solps_tcsh will become unavailable. In that case, refer to the original SOLPS-container package for instructions.

- Tell the shell where to look for your SOLPS container.

    export SOLPS_CONTAINER=/scratch/jirakova/solps-3.0.8-container

- Open the SOLPS-ITER work environment inside the container.

    solps_tcsh

  You are now inside a fully functional installation of SOLPS-ITER. Refer to the read-only container tutorials for instructions on how to work with it.
Switch to another SOLPS-ITER version
- Log into the Soroban node where your SOLPS container is located and start an interactive QSUB job there.

    qsub -IX -l select=1:host=soroban-node-03

- Load the SOLPS-container module and define the path to your container.

    module load solps-container/userdefined
    export SOLPS_CONTAINER=/path/to/container/

- Open the SOLPS-ITER work environment.

    solps_tcsh

- Tell Git to ignore the files default_compiler and whereami. Otherwise, they will cause conflicts when pulling a different SOLPS version.

    git checkout default_compiler whereami

- Pull the desired SOLPS-ITER version using Git. To switch to the current master version:

    git checkout master
    git pull

  To switch to a particular version:

    git checkout 3.0.7-41-g0c21b66
    cd modules/B2.5
    git checkout 3.0.7-42-g5fe4159
    cd ../Eirene
    git checkout 3.0.7-26-g9be0d5f
    cd ../Carre
    git checkout 3.0.7-5-gef3eaef
    cd ../DivGeo
    git checkout 3.0.7-6-g594e4d6

  Refer to Questions and answers: How do I install a particular version of SOLPS-ITER? on where to find the particular version numbers.

- Build the correct default_compiler and whereami files again.

    /opt/fix_whereami.sh .

- Exit the SOLPS-ITER work environment and enter it again. This will ensure the up-to-date source.csh is loaded.

    exit
    solps_tcsh

- Compile SOLPS-ITER using the make command (gmake, which is used throughout the manual, doesn't work here for some reason).

    make -j 8

  If you get LaTeX errors about special characters in German names, don't mind them. They will not prevent SOLPS or the manual from compiling. If the compilation fails at some point, exit the SOLPS-ITER work environment, enter it again, and continue the compilation.