Can ABACUS 2.0 be used for handling of sensitive personal data (følsomme persondata)?
No. Data must be anonymised before it is uploaded to ABACUS 2.0.
There are two categories of anonymisation:
- Anonymised – stripped of any elements that would allow identification of individuals.
- Pseudo-anonymised – individual records could be identified by authorised personnel.
ABACUS 2.0 can only be used for the first category, i.e., personal data must always be fully anonymised before it is copied to ABACUS 2.0. Adequate anonymisation involves the removal of any information that identifies, or could lead to the identification of, individuals, including but not limited to names, CPR numbers, addresses, etc. Encryption or hashing of identifying data is not considered adequate anonymisation.
You are welcome to contact us if you have any questions.
How do I run MPI jobs?
The recommended way to run MPI jobs is to create an sbatch job script that combines --nodes and --ntasks-per-node to request the number of nodes and MPI ranks per node you want. In the job script, use srun to start the application (and not mpirun or mpiexec).
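A minimal sketch of such a job script, assuming a hypothetical application name (my_mpi_app) and illustrative node/rank counts that you should adjust to your own needs:

```shell
#!/bin/bash
# Request 4 nodes with 24 MPI ranks per node (example values)
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=24
#SBATCH --time=01:00:00

# Start the MPI application with srun, not mpirun/mpiexec
srun ./my_mpi_app
```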
For further information, look at our Slurm help page.
How do I specify that I require a slim node with a large local disk?
256 of the slim nodes have a 400 GB SSD, while the remaining 192 slim nodes have a 200 GB SSD. If your application requires the larger disk, request it by adding --constraint=d400 to your sbatch command or sbatch script.
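For example, the constraint can be given either on the command line or inside the script (myjob.sh is a hypothetical script name):

```shell
# On the command line:
sbatch --constraint=d400 myjob.sh

# Or inside the job script:
#SBATCH --constraint=d400
```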
What can the frontend nodes be used for?
The frontend nodes, feX.deic.sdu.dk, are intended for transferring files in and out of ABACUS 2.0, for compiling software, and for submitting sbatch jobs to the Slurm queue. You can also use them for very short test runs to ensure that your jobs will actually run when submitted via sbatch.
When compiling software, make sure you do not use too many resources (CPU, memory), as this affects other users on the frontend nodes; e.g., do not use make -j24. Always use interactive jobs when running longer tests.
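An interactive job can be started with srun, for example (the node count and time limit below are illustrative):

```shell
# Allocate one node for 30 minutes and open an interactive shell on it
srun --nodes=1 --time=00:30:00 --pty bash -i
```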
My Slurm job never starts but stays in state AssocGrpCPUMinutesLimit
If you have used most of the node hours available to your project, your jobs may end up in the state AssocGrpCPUMinutesLimit, as shown below:
testuser@fe1:~$ squeue -u testuser
  JOBID PARTITION   NAME     USER ST  TIME NODES NODELIST(REASON)
 654321      slim  job22 testuser PD  0:00     4 (AssocGrpCPUMinutesLimit)
AssocGrpCPUMinutesLimit means that there are not enough node hours left in your Slurm account to run the job. Check the output of abc-quota to see the current number of available node hours.
Note that a Slurm job only starts if there are sufficient node hours left on the account for the entire job to run to completion, e.g., a three-node, 24-hour job requires at least 72 node hours left on the account.
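The required node hours are simply the number of nodes multiplied by the requested wall-clock time in hours, which can be checked with a quick calculation:

```shell
# Node hours required = number of nodes x wall-clock hours
# For the three-node, 24-hour job above:
echo $((3 * 24))   # → 72
```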