job_container/tmpfs is an optional plugin that provides job-specific, private temporary file system space.

When enabled on the cluster, a filesystem namespace will be created for each job, with a unique, private instance of /tmp and /dev/shm for the job to use. These directories can be changed with the Dirs= option in job_container.conf. The contents of these directories are removed at job termination.


This plugin is built and installed as part of the default Slurm build; no extra installation steps are required.


Slurm must be configured to load the job container plugin by setting JobContainerType=job_container/tmpfs and PrologFlags=contain in slurm.conf. Additional configuration must be done in the job_container.conf file, which should be placed in the same directory as slurm.conf.
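For example, the relevant slurm.conf lines might look like this (all other settings omitted):

```
# slurm.conf (excerpt)
JobContainerType=job_container/tmpfs
PrologFlags=contain
```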

Job containers can be configured for all nodes, or for a subset of nodes. As an example, if all nodes will be configured the same way, you would put the following in your job_container.conf:
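A minimal sketch (AutoBasePath= and BasePath= are job_container.conf parameters; the path shown is a site-specific assumption):

```
# job_container.conf
# With no NodeName= line, this configuration applies to all nodes.
AutoBasePath=true
BasePath=/var/nvme/storage
```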


A full description of the parameters available in the job_container.conf file can be found in the job_container.conf man page.

Initial Testing

An easy way to verify that the container is working is to run a job and check that its /tmp directory is empty (the node's shared /tmp normally contains other files) and that "." is owned by the user who submitted the job.

tim@slurm-ctld:~$ srun ls -al /tmp
total 8
drwx------  2 tim    root 4096 Feb 10 17:14 .
drwxr-xr-x 21 root   root 4096 Nov 15 08:46 ..

While a job is running, root should be able to confirm that /$BasePath/$JobID/_tmp exists and is empty. This directory is bind mounted into the job as its private /tmp. /$BasePath/$JobID should be owned by root and is not intended to be accessible to the user.


This plugin interfaces with the SPANK API, and automatically joins the job's container in the following functions:

  • spank_task_init_privileged()
  • spank_task_init()

In addition to the job itself, the TaskProlog will also be executed inside the container.
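Because the TaskProlog runs inside the container, it can stage files into the job's private /tmp before the tasks start. A minimal sketch (the script path and the JOB_SCRATCH variable name are assumptions for illustration; Slurm does turn "export NAME=value" lines printed by the TaskProlog into environment variables for the tasks):

```shell
#!/bin/sh
# Hypothetical TaskProlog script (pointed to by TaskProlog= in slurm.conf).
# Since job_container/tmpfs joins the container before this runs, the file
# below lands in the job's private /tmp, not the node's shared /tmp.
echo "job scratch ready" > /tmp/job_prolog_marker

# Slurm interprets "export NAME=value" lines on the TaskProlog's stdout as
# environment variables for the job's tasks.
echo "export JOB_SCRATCH=/tmp"
```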

Last modified 29 November 2023