{"id":1100,"date":"2017-11-14T14:43:01","date_gmt":"2017-11-14T14:43:01","guid":{"rendered":"http:\/\/35.198.183.193\/?page_id=1100"},"modified":"2019-07-15T07:53:02","modified_gmt":"2019-07-15T07:53:02","slug":"supported-applications","status":"publish","type":"page","link":"http:\/\/escience.sdu.dk\/index.php\/supported-applications\/","title":{"rendered":"Supported applications"},"content":{"rendered":"<p>At ABACUS2.0 we maintain a short list of standard software available for our users. A few software packages are only available for some research groups.<\/p>\n<p>Users are not limited to using the software installed by us. You are welcome to install your own software either in your home directory or in your project&#8217;s <code>\/work\/project\/<\/code> folder.<\/p>\n<p>You are also welcome to contact us and we will in many cases help you with the installation. If the software is freely available, we will, in general, add the software to our software &#8220;modules&#8221;. For more information on modules, see our page specifically on&nbsp;<a href=\"\/index.php\/modules\/\">modules<\/a>.<\/p>\n<p>Many software modules are available in multiple versions. The default version is shown below. 
To get consistent results, you should always specify the version you want when using <code>module load<\/code> in your sbatch scripts.<\/p>\n<h4>Applications<\/h4>\n<ul class=\"nav nav-tabs\">\n<li class=\"active\"><a data-toggle=\"tab\" href=\"#Amber\">Amber<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Comsol\">Comsol<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Gaussian09\">Gaussian09<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Gaussview\">Gaussview<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Gromacs\">Gromacs<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#MATLAB\">MATLAB<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Namd\">Namd<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#NetLogo\">NetLogo<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Photoscan\">Photoscan<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Python\">Python<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#R\">R\/RStudio<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Julia\">Julia<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Stata\">Stata<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Vmd\">Vmd<\/a><\/li>\n<li><a data-toggle=\"tab\" href=\"#Jupyter\">Jupyter<\/a><\/li>\n<\/ul>\n<div class=\"tab-content\">\n<div id=\"Amber\" class=\"tab-pane fade in active\">\n<p>AMBER is a collection of molecular dynamics simulation programs. Amber is ONLY available to SDU users. To get the currently default version of the amber module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load amber\/16-2016.05\n<\/code><\/pre>\n<\/div>\n<p>An example of sbatch script can be found on the ABACUS2.0 frontend node at the location<code>\/opt\/sys\/documentation\/sbatch-scripts\/amber\/amber-14-2015.05.sh<\/code>. The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! 
\/bin\/bash<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#SBATCH --account test00_gpu      # account<\/span>\n<span class=\"c\">#SBATCH --nodes 1                 # number of nodes<\/span>\n<span class=\"c\">#SBATCH --ntasks-per-node 2       # number of MPI tasks per node<\/span>\n<span class=\"c\">#SBATCH --time 2:00:00            # max time (HH:MM:SS)<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\"># Name of the job<\/span>\n<span class=\"c\">#SBATCH --job-name test-1-node<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\"># Send email<\/span>\n<span class=\"c\"># Your email address from deic-adm.sdu.dk is used<\/span>\n<span class=\"c\"># Valid types are BEGIN, END, FAIL, REQUEUE, and ALL<\/span>\n<span class=\"c\">#SBATCH --mail-type=ALL<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\"># Write stdout\/stderr output, %j is replaced with the job number<\/span>\n<span class=\"c\"># use same path name to write everything to one file<\/span>\n<span class=\"c\"># The files are by default placed in directory you call sbatch<\/span>\n<span class=\"c\">#SBATCH --output slurm-%j.txt<\/span>\n<span class=\"c\">#SBATCH --error  slurm-%j.txt<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n\n<span class=\"c\"># Load relevant modules<\/span>\nmodule purge\nmodule add amber\/14-2015.05\n\n<span class=\"c\"># Copy all input files to 
local scratch on all nodes<\/span>\n<span class=\"k\">for<\/span> f in *.inp *.prmtop *.inpcrd <span class=\"p\">;<\/span> <span class=\"k\">do<\/span>\n    sbcast <span class=\"s2\">\"<\/span><span class=\"nv\">$f<\/span><span class=\"s2\">\"<\/span> <span class=\"s2\">\"<\/span><span class=\"nv\">$LOCALSCRATCH<\/span><span class=\"s2\">\/<\/span><span class=\"nv\">$f<\/span><span class=\"s2\">\"<\/span>\n<span class=\"k\">done<\/span>\n\n<span class=\"nb\">cd<\/span> <span class=\"s2\">\"<\/span><span class=\"nv\">$LOCALSCRATCH<\/span><span class=\"s2\">\"<\/span>\n\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> <span class=\"s2\">\"<\/span><span class=\"si\">${<\/span><span class=\"nv\">CUDA_VISIBLE_DEVICES<\/span><span class=\"k\">:-<\/span><span class=\"nv\">NoDevFiles<\/span><span class=\"si\">}<\/span><span class=\"s2\">\"<\/span> !<span class=\"o\">=<\/span> NoDevFiles <span class=\"o\">]<\/span><span class=\"p\">;<\/span> <span class=\"k\">then<\/span>\n    <span class=\"c\"># We have access to at least one GPU<\/span>\n    <span class=\"nv\">cmd<\/span><span class=\"o\">=<\/span>pmemd.cuda.MPI\n<span class=\"k\">else<\/span>\n    <span class=\"c\"># no GPUs available<\/span>\n    <span class=\"nv\">cmd<\/span><span class=\"o\">=<\/span>pmemd.MPI\n<span class=\"k\">fi<\/span>\n\n<span class=\"nb\">export <\/span><span class=\"nv\">INPF<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"<\/span><span class=\"nv\">$LOCALSCRATCH<\/span><span class=\"s2\">\/input\"<\/span>\n<span class=\"nb\">export <\/span><span class=\"nv\">OUPF<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"<\/span><span class=\"nv\">$LOCALSCRATCH<\/span><span class=\"s2\">\/output\"<\/span>\nsrun <span class=\"s2\">\"<\/span><span class=\"nv\">$cmd<\/span><span class=\"s2\">\"<\/span> -O -i em.inp -o <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\/em.out\"<\/span> -r em.rst <span class=\"se\">\\<\/span>\n     -p 
test.prmtop -c test.inpcrd -ref test.inpcrd\n\n<span class=\"nb\">echo <\/span>Done.<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/ambermd.org\/\">http:\/\/ambermd.org\/<\/a><br>Versions available:<\/p>\n<ul>\n<li>amber\/14-2015.04<\/li>\n<li>amber\/14-2015.05<\/li>\n<li>amber\/16-2016.05 (default)<\/li>\n<li>amber\/16-2017.02<\/li>\n<li>amber\/16-2017.04<\/li>\n<\/ul>\n<\/div>\n<div id=\"Comsol\" class=\"tab-pane fade\">\n<p>COMSOL Multiphysics is a finite element analysis (FEA) and simulation software package for various physics and engineering applications, especially coupled phenomena (multiphysics). COMSOL is currently only available for a&nbsp;<em>very small&nbsp;<\/em>set of users. Contact us if you want to use COMSOL to hear about your options. By default, COMSOL creates a lot of small files in your <code>$HOME\/.comsol<\/code> folder. These files may fill up your home directory, after which COMSOL and other programs are not able to run. The easiest way to fix this is to use \/tmp for all COMSOL-related temporary files (with the side effect that COMSOL settings are not saved between consecutive COMSOL runs).<\/p>\n<pre><code>rm -rf ~\/.comsol\nln -s \/tmp ~\/.comsol\n<\/code><\/pre>\n<p>To get the currently default version of the comsol module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load comsol\/5.2\n<\/code><\/pre>\n<\/div>\n<p>An example sbatch script can be found on the ABACUS2.0 frontend node at the location <code>\/opt\/sys\/documentation\/sbatch-scripts\/comsol\/comsol-5.1.sh<\/code>. The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! 
\/bin\/bash<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#SBATCH --nodes 1                 # number of nodes<\/span>\n<span class=\"c\">#SBATCH --ntasks-per-node 1       # number of MPI tasks per node<\/span>\n<span class=\"c\">#SBATCH --time 2:00:00            # max time (HH:MM:SS)<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n\n<span class=\"c\"># Load relevant modules<\/span>\nmodule purge\nmodule add comsol\/5.1\n\n<span class=\"nv\">IN_MPH<\/span><span class=\"o\">=<\/span>GSPmetasurface2D.mph\n<span class=\"nv\">OUT_MPH<\/span><span class=\"o\">=<\/span>out.mph\n\ncomsol -clustersimple batch -inputfile <span class=\"nv\">$IN_MPH<\/span> -outputfile <span class=\"nv\">$OUT_MPH<\/span>\n\n<span class=\"nb\">echo <\/span>Done.\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/www.comsol.com\/comsol-multiphysics\">http:\/\/www.comsol.com\/comsol-multiphysics<\/a><br>Versions available:<\/p>\n<ul>\n<li>comsol\/5.1<\/li>\n<li>comsol\/5.2a<\/li>\n<li>comsol\/5.2 (default)<\/li>\n<li>comsol\/5.3<\/li>\n<\/ul>\n<\/div>\n<div id=\"Gaussian09\" class=\"tab-pane fade\">\n<p>Gaussian 09 provides state-of-the-art capabilities for electronic structure modeling. Gaussian sbatch job scripts can be generated and submitted using the command<code>subg09 test.com<\/code>. 
Use <code>subg09 -p test.com<\/code> to see the generated script without submitting it. This module is only available to some SDU users. Contact&nbsp;<a href=\"mailto:support@escience.sdu.dk\">support@escience.sdu.dk<\/a> for more information.<\/p>\n<p>To get the currently default version of the gaussian09 module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load gaussian09\/D.01\n<\/code><\/pre>\n<\/div>\n<p>An example sbatch script can be found on the ABACUS2.0 frontend node at the location <code>\/opt\/sys\/documentation\/sbatch-scripts\/gaussian09\/gaussian09-D.01.sh<\/code>. The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! \/bin\/bash<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\"># Gaussian job script<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#SBATCH --nodes 1<\/span>\n<span class=\"c\">#SBATCH --job-name test<\/span>\n<span class=\"c\">#SBATCH --time 10:00:00<\/span>\n\n<span class=\"c\"># Setup environment<\/span>\nmodule purge\nmodule add gaussian09\/D.01\n\n<span class=\"c\"># Run Gaussian<\/span>\ng09 &lt; test.com &gt;<span class=\"p\">&amp;<\/span> test.log\n\n<span class=\"c\"># Copy chk file back to workdir<\/span>\n<span class=\"nb\">test<\/span> -r <span class=\"nv\">$GAUSS_SCRDIR<\/span>\/test.chk <span class=\"o\">&amp;&amp;<\/span> cp -u <span class=\"nv\">$GAUSS_SCRDIR<\/span>\/test.chk .\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/www.gaussian.com\/g_prod\/g09.htm\">http:\/\/www.gaussian.com\/g_prod\/g09.htm<\/a><br>Versions available:<\/p>\n<ul>\n<li>gaussian09\/D.01 (default)<\/li>\n<\/ul>\n<\/div>\n<div id=\"Gaussview\" class=\"tab-pane fade\">\n<p>GaussView is a GUI for Gaussian 09. 
This module is only available to some SDU users. Contact&nbsp;<a href=\"mailto:support@escience.sdu.dk\">support@escience.sdu.dk<\/a>&nbsp;for more information. To get the currently default version of the gaussview module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load gaussview\/5.0.8\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/www.gaussian.com\/g_prod\/gv5.htm\">http:\/\/www.gaussian.com\/g_prod\/gv5.htm<\/a><br>Versions available:<\/p>\n<ul>\n<li>gaussview\/5.0.8 (default)<\/li>\n<\/ul>\n<\/div>\n<div id=\"Gromacs\" class=\"tab-pane fade\">\n<p>GROMACS is a collection of molecular dynamics simulation programs. To get the currently default version of the gromacs module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load gromacs\/5.1.2\n<\/code><\/pre>\n<\/div>\n<p>An example sbatch script can be found on the ABACUS2.0 frontend node at the location <code>\/opt\/sys\/documentation\/sbatch-scripts\/gromacs\/gromacs-5.1.2.sh<\/code>. The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! 
\/bin\/bash<\/span>\n<span class=\"c\">#SBATCH --account sdutest00_gpu<\/span>\n<span class=\"c\">#SBATCH --nodes 8<\/span>\n<span class=\"c\">#SBATCH --time 24:00:00<\/span>\n<span class=\"c\">#SBATCH --mail-type=ALL<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\"># MPI ranks per node:<\/span>\n<span class=\"c\"># * GPU nodes.....: one rank per GPU card, i.e., 2<\/span>\n<span class=\"c\"># * Slim\/fat nodes: one rank per CPU core, i.e., 24<\/span>\n<span class=\"c\">#SBATCH --ntasks-per-node 2<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"nv\">$SLURM_NODELIST<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"nv\">$SLURM_SUBMIT_DIR<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span>\n\nmodule purge\nmodule add gromacs\/5.1.2\n\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> <span class=\"s2\">\"<\/span><span class=\"si\">${<\/span><span class=\"nv\">CUDA_VISIBLE_DEVICES<\/span><span class=\"k\">:-<\/span><span class=\"nv\">NoDevFiles<\/span><span class=\"si\">}<\/span><span class=\"s2\">\"<\/span> !<span class=\"o\">=<\/span> NoDevFiles <span class=\"o\">]<\/span><span class=\"p\">;<\/span> <span class=\"k\">then<\/span>\n    <span class=\"nv\">cmd<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"gmx_gpu_mpi mdrun\"<\/span>\n<span class=\"k\">else<\/span>\n    <span class=\"nv\">cmd<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"gmx_mpi mdrun\"<\/span>\n<span class=\"k\">fi<\/span>\n\n<span class=\"c\"># Cores per MPI rank<\/span>\n<span class=\"nv\">OMP<\/span><span class=\"o\">=<\/span><span class=\"k\">$((<\/span> <span class=\"m\">24<\/span> <span class=\"o\">\/<\/span> <span class=\"nv\">$SLURM_NTASKS_PER_NODE<\/span> <span class=\"k\">))<\/span>\n\n<span class=\"c\"># prod is the name 
of the input file<\/span>\nsrun <span class=\"nv\">$cmd<\/span> -pin on -ntomp <span class=\"nv\">$OMP<\/span> -notunepme -deffnm prod -cpi prod.cpt -append\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/www.gromacs.org\/\">http:\/\/www.gromacs.org\/<\/a><br>Versions available:<\/p>\n<ul>\n<li>gromacs\/4.5.7<\/li>\n<li>gromacs\/4.5.7-p2.3.2<\/li>\n<li>gromacs\/4.6.7<\/li>\n<li>gromacs\/5.0.4-openmpi<\/li>\n<li>gromacs\/5.0.4<\/li>\n<li>gromacs\/5.0.5<\/li>\n<li>gromacs\/5.0.6<\/li>\n<li>gromacs\/5.1<\/li>\n<li>gromacs\/5.1.2 (default)<\/li>\n<li>gromacs\/5.1.4<\/li>\n<li>gromacs\/5.1.4-p2.3.2<\/li>\n<li>gromacs\/2016.2<\/li>\n<li>gromacs\/2016.3<\/li>\n<li>gromacs\/2016.3-p2.3.2<\/li>\n<li>gromacs\/2016.4<\/li>\n<li>gromacs\/2016.4-dp<\/li>\n<li>gromacs\/2018.2<\/li>\n<\/ul>\n<\/div>\n<div id=\"MATLAB\" class=\"tab-pane fade\">\n<p>MATLAB (MATrix LABoratory) is a numerical computing environment developed by MathWorks. MATLAB allows matrix manipulations, plotting of functions and data, and implementation of algorithms. Note that the MATLAB file, e.g., <code>test.m<\/code>, <em>must<\/em> include <code>exit<\/code> as the last line to ensure that MATLAB exits correctly. MATLAB is currently available for&nbsp;<em>most&nbsp;<\/em>of our academic users. For further information, see the web page on our <a href=\"http:\/\/escience.sdu.dk\/index.php\/matlab-documentation\/\">MATLAB Hosting Provider Agreement<\/a>.<\/p>\n<p>For using MATLAB together with a MATLAB GUI running on your own computer\/laptop, you may want to look at our&nbsp;<a href=\"http:\/\/escience.sdu.dk\/index.php\/matlab-documentation\/\">MATLAB documentation page<\/a>. 
The page also contains further information relevant for any MATLAB user at ABACUS2.0.<\/p>\n<p>To get the currently default version of the matlab module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load matlab\/R2016a\n<\/code><\/pre>\n<\/div>\n<p>An example of sbatch script can be found on the ABACUS2.0 frontend node at the location<code>\/opt\/sys\/documentation\/sbatch-scripts\/matlab\/matlab-R2016a.sh<\/code>. The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! \/bin\/bash<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#SBATCH --nodes 1                 # number of nodes<\/span>\n<span class=\"c\">#SBATCH --time 2:00:00            # max time (HH:MM:SS)<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n\n<span class=\"c\"># Load relevant modules<\/span>\nmodule purge\nmodule add matlab\/R2016a\n\n<span class=\"c\"># Run the MATLAB code available in matlab_code.m<\/span>\n<span class=\"c\"># (note the missing .m)<\/span>\nmatlab -nodisplay -r matlab_code\n\n<span class=\"nb\">echo <\/span>Done.\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/se.mathworks.com\/products\/matlab\/\">http:\/\/se.mathworks.com\/products\/matlab\/<\/a><br>Versions 
available:<\/p>\n<ul>\n<li>matlab\/R2015a<\/li>\n<li>matlab\/R2015b<\/li>\n<li>matlab\/R2016a (default)<\/li>\n<li>matlab\/R2016b<\/li>\n<li>matlab\/R2017a<\/li>\n<li>matlab\/R2017b<\/li>\n<li>matlab\/R2018a<\/li>\n<\/ul>\n<\/div>\n<div id=\"Namd\" class=\"tab-pane fade\">\n<p>NAMD is a scalable parallel molecular dynamics package. To get the currently default version of the namd module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load namd\/2.10\n<\/code><\/pre>\n<\/div>\n<p>An example of sbatch script can be found on the ABACUS2.0 frontend node at the location<code>\/opt\/sys\/documentation\/sbatch-scripts\/namd\/namd-2.10.sh<\/code>. The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#!\/bin\/bash<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#SBATCH --account         sysops_gpu<\/span>\n<span class=\"c\">#SBATCH --time            00:10:00<\/span>\n<span class=\"c\">#SBATCH --nodes           4<\/span>\n<span class=\"c\">#SBATCH --ntasks-per-node 1<\/span>\n<span class=\"c\">#SBATCH --mail-type       FAIL<\/span>\n\n<span class=\"c\"># Also see<\/span>\n<span class=\"c\"># http:\/\/www.ks.uiuc.edu\/Research\/namd\/wiki\/index.cgi?NamdOnSLURM<\/span>\n\n<span class=\"c\"># Specify input file at submission using<\/span>\n<span class=\"c\">#    sbatch namd-test.sh \/path\/to\/input.namd<\/span>\n<span class=\"c\"># Default value is apoa1\/apoa1.namd<\/span>\n<span class=\"nv\">INPUT<\/span><span class=\"o\">=<\/span><span class=\"si\">${<\/span><span class=\"nv\">1<\/span><span class=\"p\">-apoa1\/apoa1.namd<\/span><span class=\"si\">}<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span 
class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo<\/span>\n\nmodule purge\nmodule add namd\n\n<span class=\"c\">#<\/span>\n<span class=\"c\"># Find version of namd command to use<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"nv\">cmd<\/span><span class=\"o\">=<\/span>namd2\n\n<span class=\"c\"># Should we use the MPI version?<\/span>\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NNODES<\/span><span class=\"s2\">\"<\/span> -gt <span class=\"m\">1<\/span> <span class=\"o\">]<\/span><span class=\"p\">;<\/span> <span class=\"k\">then<\/span>\n    <span class=\"nv\">cmd<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"<\/span><span class=\"nv\">$cmd<\/span><span class=\"s2\">-mpi\"<\/span>\n<span class=\"k\">fi<\/span>\n\n<span class=\"c\"># Should we use the CUDA version<\/span>\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> <span class=\"s2\">\"<\/span><span class=\"si\">${<\/span><span class=\"nv\">CUDA_VISIBLE_DEVICES<\/span><span class=\"k\">:-<\/span><span class=\"nv\">NoDevFiles<\/span><span class=\"si\">}<\/span><span class=\"s2\">\"<\/span> !<span class=\"o\">=<\/span> NoDevFiles <span class=\"o\">]<\/span><span class=\"p\">;<\/span> <span class=\"k\">then<\/span>\n    <span class=\"nv\">cmd<\/span><span class=\"o\">=<\/span><span class=\"s2\">\"<\/span><span class=\"nv\">$cmd<\/span><span class=\"s2\">-cuda\"<\/span>\n<span class=\"k\">fi<\/span>\n\n<span class=\"k\">if<\/span> <span class=\"o\">[<\/span> <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NNODES<\/span><span class=\"s2\">\"<\/span> -gt <span class=\"m\">1<\/span> 
<span class=\"o\">]<\/span><span class=\"p\">;<\/span> <span class=\"k\">then<\/span>\n    <span class=\"c\"># Worker threads per MPI rank<\/span>\n    <span class=\"nv\">WT<\/span><span class=\"o\">=<\/span><span class=\"k\">$((<\/span> <span class=\"m\">24<\/span> <span class=\"o\">\/<\/span> <span class=\"nv\">$SLURM_NTASKS_PER_NODE<\/span> <span class=\"o\">-<\/span> <span class=\"m\">1<\/span> <span class=\"k\">))<\/span>\n    <span class=\"nb\">echo <\/span>srun <span class=\"s2\">\"<\/span><span class=\"nv\">$cmd<\/span><span class=\"s2\">\"<\/span> ++ppn <span class=\"s2\">\"<\/span><span class=\"nv\">$WT<\/span><span class=\"s2\">\"<\/span> <span class=\"s2\">\"<\/span><span class=\"nv\">$INPUT<\/span><span class=\"s2\">\"<\/span>\n    srun      <span class=\"s2\">\"<\/span><span class=\"nv\">$cmd<\/span><span class=\"s2\">\"<\/span> ++ppn <span class=\"s2\">\"<\/span><span class=\"nv\">$WT<\/span><span class=\"s2\">\"<\/span> <span class=\"s2\">\"<\/span><span class=\"nv\">$INPUT<\/span><span class=\"s2\">\"<\/span>\n<span class=\"k\">else<\/span>\n    <span class=\"c\"># running on a single node<\/span>\n    charmrun ++local +p12 <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>which <span class=\"s2\">\"<\/span><span class=\"nv\">$cmd<\/span><span class=\"s2\">\"<\/span><span class=\"k\">)<\/span><span class=\"s2\">\"<\/span> <span class=\"s2\">\"<\/span><span class=\"nv\">$INPUT<\/span><span class=\"s2\">\"<\/span>\n<span class=\"k\">fi<\/span><\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/www.ks.uiuc.edu\/Research\/namd\/\">http:\/\/www.ks.uiuc.edu\/Research\/namd\/<\/a><br>Versions available:<\/p>\n<ul>\n<li>namd\/2.10 (default)<\/li>\n<\/ul>\n<\/div>\n<div id=\"NetLogo\" class=\"tab-pane fade\">\n<p>NetLogo is a multi-agent programmable modeling environment. 
To get the currently default version of the netlogo module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load netlogo\/5.2.0\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"https:\/\/ccl.northwestern.edu\/netlogo\/\">https:\/\/ccl.northwestern.edu\/netlogo\/<\/a><br>Versions available:<\/p>\n<ul>\n<li>netlogo\/5.2.0 (default)<\/li>\n<\/ul>\n<\/div>\n<div id=\"Photoscan\" class=\"tab-pane fade\">\n<p>Agisoft PhotoScan is a stand-alone software product that performs photogrammetric processing of digital images and generates 3D spatial data for use in GIS applications, cultural heritage documentation, and visual effects production, as well as for indirect measurements of objects of various scales. This module is only available to some users. Contact&nbsp;<a href=\"mailto:support@escience.sdu.dk\">support@escience.sdu.dk<\/a> for further information. To get the currently default version of the photoscan module, use:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load photoscan\/1.2.4\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>For further information: <a href=\"http:\/\/www.agisoft.com\/\">http:\/\/www.agisoft.com\/<\/a><br>Versions available:<\/p>\n<ul>\n<li>photoscan\/1.1.6<\/li>\n<li>photoscan\/1.2.4 (default)<\/li>\n<\/ul>\n<\/div>\n<div id=\"Python\" class=\"tab-pane fade\">\n<p>Many research projects use Python and various Python packages\/modules to achieve results. This page describes how to easily set up a working Python environment with your own Python packages inside. For new projects, you should in particular consider whether you want to use the 2.x or 3.x variant of Python. 
The two versions are not compatible, and in some cases you may have to use an older 2.7.x version of Python because some of your packages do not work with Python 3.x.<\/p>\n<h4 id=\"python-distributions\">Python distributions<\/h4>\n<p>At ABACUS2.0 we maintain two variants of Python and several versions of each:<\/p>\n<ul>\n<li>python\/2.7.9<\/li>\n<li>python\/2.7.10<\/li>\n<li>python\/2.7.11 (default)<\/li>\n<li>python\/2.7.12<\/li>\n<li>python\/2.7.13<\/li>\n<li>python\/3.4.3<\/li>\n<li>python\/3.5.1<\/li>\n<li>python\/3.5.2<\/li>\n<li>python\/3.6.0<\/li>\n<li>python\/3.6.3<\/li>\n<li>python-intel\/2.7.10-184913<\/li>\n<li>python-intel\/2.7.11 (default)<\/li>\n<li>python-intel\/2.7.12<\/li>\n<li>python-intel\/2.7.12.35<\/li>\n<li>python-intel\/2.7.14<\/li>\n<li>python-intel\/3.5.0-185146<\/li>\n<li>python-intel\/3.5.1<\/li>\n<li>python-intel\/3.5.2<\/li>\n<li>python-intel\/3.5.2.35<\/li>\n<li>python-intel\/3.6.3<\/li>\n<\/ul>\n<p>The vanilla Python versions (<code>python<\/code>) include Python and a few extra packages, in particular <code>virtualenv<\/code> (see below). For further information, have a look at the official Python home page:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.python.org\/\">python.org<\/a><\/li>\n<\/ul>\n<p>The Intel-optimized version of Python (<code>python-intel<\/code>) has been compiled by Intel and includes many widely used Python packages, including&nbsp;<code>numpy<\/code>, <code>scipy<\/code>, <code>pandas<\/code>, <code>matplotlib<\/code>, <code>virtualenv<\/code>, etc. 
For more information, look at the official Intel Python home page:<\/p>\n<ul>\n<li><a href=\"https:\/\/software.intel.com\/en-us\/intel-distribution-for-python\">Intel Distribution for Python<\/a><\/li>\n<\/ul>\n<p>To use a particular version of Python, simply use&nbsp;<code>module add<\/code>:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module add python-intel\/3.5.2.35\n<\/code><\/pre>\n<\/div>\n<h4 id=\"adding-extra-packages\">Adding extra packages<\/h4>\n<p>In many cases you will need extra Python packages for your project. In the following we describe two ways to do this. Consider both of them and use the one most suitable for your project.<\/p>\n<p>As noted above, also consider using one of the&nbsp;<code>python-intel<\/code> variants, as these already contain many packages, possibly including the ones you need.<\/p>\n<h4 id=\"adding-extra-packages-1-using-pip-user\">Adding extra packages #1 &#8211; using&nbsp;<code>pip --user<\/code><\/h4>\n<p>In the simplest case, you only need one or a few packages, and only for yourself. 
In this case, use&nbsp;<code>pip install --user<\/code> to install the package in your own home directory, as shown below: first use&nbsp;<code>module add<\/code> to select the right Python version, then use&nbsp;<code>pip install<\/code>.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module add python-intel\/3.5.2.35\n<span class=\"gp\">testuser@fe1:~$<\/span> pip install --user Pillow\n<span class=\"go\">Collecting Pillow<\/span>\n<span class=\"go\">  Downloading Pillow-4.1.0-cp35-cp35m-manylinux1_x86_64.whl (5.7MB)<\/span>\n<span class=\"go\">    100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.7MB 204kB\/s<\/span>\n<span class=\"go\">Collecting olefile (from Pillow)<\/span>\n<span class=\"go\">  Downloading olefile-0.44.zip (74kB)<\/span>\n<span class=\"go\">    100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 81kB 8.6MB\/s<\/span>\n<span class=\"go\">Building wheels for collected packages: olefile<\/span>\n<span class=\"go\">  Running setup.py bdist_wheel for olefile ... 
done<\/span>\n<span class=\"go\">  Stored in directory: \/home\/testuser\/.cache\/pip\/wheels\/20\/...<\/span>\n<span class=\"go\">Successfully built olefile<\/span>\n<span class=\"go\">Installing collected packages: olefile, Pillow<\/span>\n<span class=\"go\">Successfully installed Pillow-4.1.0 olefile-0.44<\/span><\/code><\/pre>\n<\/div>\n<p>Files are installed in your home directory (in <code>~\/.local\/<\/code>).<\/p>\n<p>Things to consider:<\/p>\n<ul>\n<li>The packages are only available to your own user, not to anybody else.<\/li>\n<li>If you change the Python version selected with <code>module add<\/code>, the package may no longer work, and you may have to reinstall it.<\/li>\n<\/ul>\n<h4 id=\"adding-extra-packages-2-using-virtualenv\">Adding extra packages #2 &#8211; using <code>virtualenv<\/code><\/h4>\n<p><code>virtualenv<\/code> is a tool that creates isolated Python environments. In each environment you select the Python version and Python packages needed for your project. If you keep old virtualenv environments, you can later rerun job scripts in the exact same Python environment as when you ran them the first time.<\/p>\n<h4 id=\"creating-the-environment\">Creating the environment<\/h4>\n<p>The Python files need to be placed in a directory. In the following examples we use <code>\/work\/sdutest\/tensor-1.2<\/code> to install our own version of <a href=\"https:\/\/www.tensorflow.org\/\">TensorFlow<\/a>. You should instead use a directory within one of your own project directories.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module purge\n<span class=\"gp\">testuser@fe1:~$<\/span> <span class=\"c\"># tensorflow also requires the CUDA and cudnn modules<\/span>\n<span class=\"gp\">testuser@fe1:~$<\/span> module add python\/3.5.2 cuda\/8.0.44 cudnn\/5.1\n<span class=\"gp\">testuser@fe1:~$<\/span> virtualenv \/work\/sdutest\/tensor-1.2\n<span class=\"go\">PYTHONHOME is set.  
You *must* activate the virtualenv before using it<\/span>\n<span class=\"go\">Using base prefix '\/opt\/sys\/apps\/python\/3.5.2'<\/span>\n<span class=\"go\">New python executable in \/work\/sdutest\/tensor-1.2\/bin\/python3.5<\/span>\n<span class=\"go\">Also creating executable in \/work\/sdutest\/tensor-1.2\/bin\/python<\/span>\n<span class=\"go\">Installing setuptools, pip, wheel...done.<\/span>\n<span class=\"gp\">testuser@fe1:~$<\/span> <span class=\"nb\">source<\/span> \/work\/sdutest\/tensor-1.2\/bin\/activate\n<span class=\"go\">(tensor-1.2) testuser@fe1:~$ # you are now inside your own Python environment<\/span><\/code><\/pre>\n<\/div>\n<p>Note the line with <code>source \/work\/sdutest\/tensor-1.2\/bin\/activate<\/code>. You&#8217;ll need to repeat this step every time before you actually use your new Python environment.<\/p>\n<p>We suggest editing the <code>activate<\/code> script to include the <code>module purge<\/code> and <code>module add<\/code> lines from above, so that the correct environment is easily set up every time you use it. The two lines <em>must<\/em> be added to the top of the file.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> nano \/work\/sdutest\/tensor-1.2\/bin\/activate\n<span class=\"go\"># add module purge and module add ... 
lines at the top<\/span><\/code><\/pre>\n<\/div>\n<h4 id=\"adding-packages\">Adding packages<\/h4>\n<p>After the initial package setup, you can use <code>pip install<\/code> as you would if you had installed Python yourself, e.g.,<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> <span class=\"nb\">source<\/span> \/work\/sdutest\/tensor-1.2\/bin\/activate\n<span class=\"go\">(tensor-1.2) testuser@fe1:~$ which pip<\/span>\n<span class=\"go\">\/work\/sdutest\/tensor-1.2\/bin\/pip<\/span>\n<span class=\"go\">(tensor-1.2) testuser@fe1:~$ pip3 install --upgrade tensorflow-gpu<\/span>\n<span class=\"go\">Collecting tensorflow-gpu<\/span>\n<span class=\"go\">  Downloading tensorflow_gpu-1.1.0-cp35-cp35m-manylinux1_x86_64.whl (84.1MB)<\/span>\n<span class=\"go\">    100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 84.1MB 18kB\/s<\/span>\n<span class=\"go\">Collecting protobuf&gt;=3.2.0 (from tensorflow-gpu)<\/span>\n<span class=\"go\">...<\/span>\n<span class=\"go\">Installing collected packages: protobuf, numpy, werkzeug, tensorflow-gpu<\/span>\n<span class=\"go\">Successfully installed numpy-1.12.1 protobuf-3.3.0 tensorflow-gpu-1.1.0 werkzeug-0.12.2<\/span>\n<span class=\"go\">(tensor-1.2) testuser@fe1:~$<\/span><\/code><\/pre>\n<\/div>\n<h4 id=\"using-the-environment\">Using the environment<\/h4>\n<p>If you added the <code>module purge<\/code> and <code>module add ...<\/code> lines as described in the first step, you simply need to <code>source<\/code> the <code>activate<\/code> script every time before starting to use the Python environment.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> <span class=\"nb\">source<\/span> \/work\/sdutest\/tensor-1.2\/bin\/activate\n<span class=\"go\">(tensor-1.2) testuser@fe1:~$ # you are now inside your own Python 
environment<\/span><\/code><\/pre>\n<\/div>\n<p>Similarly, in your Slurm job scripts you should add the <code>source<\/code> line as shown below:<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! \/bin\/bash<\/span>\n<span class=\"c\">#SBATCH --account sdutest_gpu     # account<\/span>\n<span class=\"c\">#SBATCH --time 2:00:00            # max time (HH:MM:SS)<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n\n<span class=\"c\"># Load the Python environment<\/span>\n<span class=\"nb\">source<\/span> \/work\/sdutest\/tensor-1.2\/bin\/activate\n\n<span class=\"c\"># Start your python application<\/span>\npython ...\n\n<span class=\"nb\">echo <\/span>Done.\n<\/code><\/pre>\n<\/div>\n<\/div>\n<div id=\"R\" class=\"tab-pane fade\">\n<p>R is a programming language and software environment for statistical computing and graphics. The R language is widely used among statisticians and data miners for developing statistical software and for data analysis. To get the currently default version of the R module, use<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load R\/3.2.5\n<\/code><\/pre>\n<\/div>\n<p>An example sbatch script can be found on the ABACUS 2.0 frontend node at the location <code>\/opt\/sys\/documentation\/sbatch-scripts\/R\/R-3.2.2.sh<\/code>. 
The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! \/bin\/bash<\/span>\n<span class=\"c\">#SBATCH --account test00_gpu<\/span>\n<span class=\"c\">#SBATCH --nodes 1<\/span>\n<span class=\"c\">#SBATCH --time 1:00:00<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n\nmodule purge\nmodule add R\/3.2.2\n\nR --vanilla &lt; Fibonacci.R\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>Further information: <a href=\"http:\/\/www.r-project.org\/\">http:\/\/www.r-project.org\/<\/a><\/p>\n<p>Versions available:<\/p>\n<ul>\n<li>R\/3.2.2<\/li>\n<li>R\/3.2.5 (default)<\/li>\n<li>R\/3.3.1<\/li>\n<li>R\/3.3.2<\/li>\n<li>R\/3.5.0<\/li>\n<li>R\/3.5.1<\/li>\n<\/ul>\n<p>An integrated development environment for R, <a href=\"https:\/\/support.rstudio.com\/hc\/en-us\/sections\/200107586-Using-the-RStudio-IDE\">RStudio Desktop<\/a>, is available as a module file:<\/p>\n<pre><code><span class=\"gp\">testuser@fe1:~$ <\/span>module load rstudio\/1.1.456<\/code><\/pre>\n<p>The software only works in an interactive session, using X or VNC.<\/p>\n<\/div>\n<div id=\"Julia\" class=\"tab-pane fade\">\n<p>Julia is a high-level, modern, open-source programming language for scientific, mathematical and numeric computing. 
Julia aims to combine the ease of use and intuitive syntax of R, Python, and MATLAB with performance close to that of Fortran and C\/C++. Julia also provides built-in support for multithreading as well as parallel and distributed computing.<\/p>\n<h4 id=\"julia-distributions\">Available distributions<\/h4>\n<ul>\n<li>julia\/0.6.4<\/li>\n<li>julia\/1.0.2 (default)<\/li>\n<\/ul>\n<p>To use a particular version of Julia, simply use&nbsp;<code>module add<\/code>&nbsp;and run the command&nbsp;<code>julia<\/code>&nbsp;to open the Julia REPL:<\/p>\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module add julia\/1.0.2\n<span class=\"gp\">testuser@fe1:~$<\/span> julia\n               _\n   _       _ _(_)_     |  Documentation: https:\/\/docs.julialang.org\n  (_)     | (_) (_)    |\n   _ _   _| |_  __ _   |  Type \"?\" for help, \"]?\" for Pkg help.\n  | | | | | | |\/ _` |  |\n  | | |_| | | | (_| |  |  Version 1.0.2 (2018-11-08)\n _\/ |\\__'_|_|_|\\__'_|  |  Official https:\/\/julialang.org\/ release\n|__\/                   |\n\njulia&gt;<\/code><\/pre>\n<p>Julia provides a built-in package manager, <a href=\"https:\/\/docs.julialang.org\/en\/v1\/stdlib\/Pkg\/index.html\">Pkg<\/a>, for installing additional packages that are written in Julia. Version and dependency management is handled automatically by Pkg. The default installation directory of a Julia package is in the folder&nbsp;<code>$HOME\/.julia\/<\/code>. 
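<\/p>
<p>A minimal sketch of the Pkg workflow (the package name <code>JSON<\/code> and the commands below are only an illustration, not a site recommendation):<\/p>

```shell
# Assumed workflow once julia is on PATH (e.g. after `module add julia/1.0.2`):
#   julia -e 'using Pkg; Pkg.add("JSON")'
# Pkg resolves versions and dependencies automatically; packages land in the
# active depot, which defaults to a folder under the home directory:
echo "default depot: ${JULIA_DEPOT_PATH:-$HOME/.julia}"
```

<p>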
A new installation folder can be defined by exporting the corresponding environment variable:<\/p>\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> # for Julia-v0.6\n<span class=\"gp\">testuser@fe1:~$<\/span> module load julia\/0.6.4; export JULIA_PKGDIR=\/work\/sysops\/julia_package_dir\/\n<span class=\"gp\">testuser@fe1:~$<\/span> # for Julia-v1.0\n<span class=\"gp\">testuser@fe1:~$<\/span> module load julia\/1.0.2; export JULIA_DEPOT_PATH=\/work\/sysops\/julia_package_dir\/<\/code><\/pre>\n<p>It is convenient to place the Julia package directory in your&nbsp;<code>\/work\/project<\/code>&nbsp;folder instead of the user&nbsp;<code>$HOME<\/code>&nbsp;directory, because the home directory has limited storage capacity.<\/p>\n<p>Further information:&nbsp;<a href=\"https:\/\/julialang.org\/\">https:\/\/julialang.org\/<\/a><\/p>\n<\/div>\n<div id=\"Stata\" class=\"tab-pane fade\">\n<p>Stata is a general-purpose statistical software package created by StataCorp. This module is currently only available to SDU users. Contact support@escience.sdu.dk for more information. To get the currently default version of the stata module, use<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load stata\/14.0\n<\/code><\/pre>\n<\/div>\n<p>An example sbatch script can be found on the ABACUS 2.0 frontend node at the location <code>\/opt\/sys\/documentation\/sbatch-scripts\/stata\/stata-14.0.sh<\/code>. The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! 
\/bin\/bash<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#SBATCH --nodes 1                 # number of nodes<\/span>\n<span class=\"c\">#SBATCH --ntasks-per-node 1       # number of MPI tasks per node<\/span>\n<span class=\"c\">#SBATCH --time 2:00:00            # max time (HH:MM:SS)<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n\n<span class=\"c\"># Load relevant modules<\/span>\nmodule purge\nmodule add stata\/14.0\n\n<span class=\"c\"># stata output is put in example.log<\/span>\nstata -b example.do\n\n<span class=\"nb\">echo <\/span>Done.\n<\/code><\/pre>\n<\/div>\n<hr>\n<p>Further information: <a href=\"https:\/\/www.stata.com\/\">https:\/\/www.stata.com\/<\/a><br>Versions available:<\/p>\n<ul>\n<li>stata\/14.0 (default)<\/li>\n<\/ul>\n<\/div>\n<div id=\"Vmd\" class=\"tab-pane fade\">\n<p>VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting. To get the currently default version of the vmd module, use<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"gp\">testuser@fe1:~$<\/span> module load vmd\/1.9.3\n<\/code><\/pre>\n<\/div>\n<p>An example sbatch script can be found on the ABACUS 2.0 frontend node at the location <code>\/opt\/sys\/documentation\/sbatch-scripts\/vmd\/vmd-1.9.3.sh<\/code>. 
The contents of this file are shown below.<\/p>\n<div class=\"codehilite\">\n<pre><code><span class=\"c\">#! \/bin\/bash<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\"># VMD job script<\/span>\n<span class=\"c\">#<\/span>\n<span class=\"c\">#SBATCH --nodes 1<\/span>\n<span class=\"c\">#SBATCH --job-name test<\/span>\n<span class=\"c\">#SBATCH --time 1:00:00<\/span>\n\n<span class=\"nb\">echo <\/span>Running on <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>hostname<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Available nodes: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_NODELIST<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Slurm_submit_dir: <span class=\"s2\">\"<\/span><span class=\"nv\">$SLURM_SUBMIT_DIR<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo <\/span>Start <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span>\n<span class=\"nb\">echo<\/span>\n\n<span class=\"c\"># Setup environment<\/span>\nmodule purge\nmodule add vmd\/1.9.3\n\n<span class=\"c\"># Run VMD<\/span>\nvmd -eofexit &lt; test.tcl\n\n<span class=\"c\"># If the TCL script takes arguments, use instead:<\/span>\n<span class=\"c\">#   vmd -e test.tcl -args &lt;arg1&gt; &lt;arg2&gt; ...<\/span>\n<span class=\"c\"># and be sure to place an exit statement at the end<\/span>\n<span class=\"c\"># of the script.<\/span>\n\n<span class=\"nb\">echo <\/span>End <span class=\"nb\">time<\/span>: <span class=\"s2\">\"<\/span><span class=\"k\">$(<\/span>date<span class=\"k\">)<\/span><span class=\"s2\">\"<\/span><\/code><\/pre>\n<\/div>\n<hr>\n<p>Further information: <a href=\"http:\/\/www.ks.uiuc.edu\/Research\/vmd\/\">http:\/\/www.ks.uiuc.edu\/Research\/vmd\/<\/a><br>Versions available:<\/p>\n<ul>\n<li>vmd\/1.9.3 (default)<\/li>\n<\/ul>\n<\/div>\n<div id=\"Jupyter\" class=\"tab-pane fade\">\n<p>Jupyter is 
an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Because Jupyter is a web application, we provide an easy-to-use tool that automatically launches Jupyter jobs on ABACUS 2.0, while forwarding the web interface to the user&#8217;s local browser. The program supports both Jupyter Notebook and JupyterLab, with Python and R kernels.<\/p>\n<div><img loading=\"lazy\" src=\"http:\/\/escience.sdu.dk\/wp-content\/uploads\/2018\/08\/run_jupyter.png\" alt=\"\" class=\"alignnone wp-image-4036 size-full\" width=\"782\" height=\"624\" srcset=\"http:\/\/escience.sdu.dk\/wp-content\/uploads\/2018\/08\/run_jupyter.png 782w, http:\/\/escience.sdu.dk\/wp-content\/uploads\/2018\/08\/run_jupyter-300x239.png 300w, http:\/\/escience.sdu.dk\/wp-content\/uploads\/2018\/08\/run_jupyter-768x613.png 768w\" sizes=\"(max-width: 782px) 100vw, 782px\" \/><\/div>\n<p><strong>Download<\/strong><br>The Python program can be downloaded here: <a href=\"https:\/\/github.com\/SDU-eScience\/abacus-jupyter\/releases\/download\/v1.2\/run_jupyter.py\" rel=\"noopener\">macOS\/Linux<\/a> or <a href=\"https:\/\/github.com\/SDU-eScience\/abacus-jupyter\/releases\/download\/v1.2\/run_jupyter.pyw\" rel=\"noopener\">Windows<\/a><br>If a newer version exists, the program will automatically update when connecting to ABACUS 2.0.<\/p>\n<p><strong>Requirements<\/strong><br>The program has been tested on macOS, Linux, and Windows. 
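<\/p>
<p>To quickly check that your local Python is recent enough for the launcher, you can run the line below (a sketch; on your machine the interpreter may be called <code>python<\/code> rather than <code>python3<\/code>):<\/p>

```shell
# Prints True when the interpreter satisfies the launcher's minimum version.
python3 -c 'import sys; print(sys.version_info >= (2, 7))'
# The assumed check for the Tkinter requirement would be:
#   python3 -c "import tkinter"   # ("import Tkinter" on Python 2)
```

<p>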
The requirements for running the program are listed below.<\/p>\n<ul>\n<li>Python 2.7 or newer<\/li>\n<li>Tkinter Python package<\/li>\n<li>OpenSSH for macOS and Linux<\/li>\n<li>PuTTY for Windows<\/li>\n<\/ul>\n<p><strong>Usage<\/strong><br>From the main window of the program, the user can specify the following settings.<\/p>\n<ul>\n<li><strong>Username<\/strong>: Username used to connect to the ABACUS 2.0 supercomputer<\/li>\n<li><strong>SSH Key<\/strong>: This should only be used if the private SSH key is stored in a non-default location<\/li>\n<li><strong>Account<\/strong>: Slurm account used for running the job<\/li>\n<li><strong>Time limit<\/strong>: Slurm wall-time limit for the job<\/li>\n<li><strong>Version<\/strong>: Python and Jupyter version used for running the job<\/li>\n<\/ul>\n<p>In most cases, only the username is mandatory, while the remaining settings can be left at their default values. After pressing the connect button, the program tries to connect to the ABACUS 2.0 supercomputer, with status messages written in the text field below. If the connection is successful, the program submits a new job to the queue system and waits until the job starts running. Please note that the length of this waiting period depends on the number of available nodes in the chosen Slurm queue. After the job starts running, the user can press the &#8220;Open Jupyter in browser&#8221; button, after which the user can start working through the Jupyter web interface. When closing the program or pressing the disconnect button, the Jupyter job is automatically stopped on the ABACUS 2.0 supercomputer. 
For this reason, the program must be running while using Jupyter in the browser.<\/p>\n<p><strong>Technical details<\/strong><br>Depending on the chosen version of Python, one of the following two modules is loaded on the ABACUS 2.0 supercomputer.<\/p>\n<ul>\n<li>python\/2.7.14<\/li>\n<li>python\/3.6.3<\/li>\n<\/ul>\n<p>If the user requires additional Python packages, the correct module should be loaded, after which the packages can be installed using pip. For example, if the user uses Python 3.6 and needs numpy, use SSH to access the ABACUS 2.0 supercomputer and run the following two commands.<\/p>\n<div class=\"codehilite\">\n<pre>module load python\/3.6.3\npip install --user numpy<\/pre>\n<\/div>\n<p>For advanced users, it is possible to, e.g., load additional modules and set environment variables before Jupyter starts running. If the file &#8220;~\/.jupyter\/modules&#8221; exists, this file is sourced into the submit script before starting Jupyter. For example, if the user needs tensorflow, run the following command:<\/p>\n<div class=\"codehilite\">\n<pre>echo \"module load tensorflow\/1.14\" &gt; ~\/.jupyter\/modules<\/pre>\n<\/div>\n<\/div>\n<\/div>\n\n\n<p>It is possible to install additional kernels inside Jupyter Notebook and Jupyter Lab. This is useful if the user needs to run notebooks in languages other than Python, such as R or Julia, but it also provides a way to load multiple Python interpreters from different environments. 
For example, the user can create a new Python environment with the commands:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">module load python\/3.6.3\npython -m venv my_env<\/pre>\n\n\n\n<p>In the new environment <em>my_env<\/em> the user can install all the relevant packages, e.g.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">source .\/my_env\/bin\/activate\npip install --upgrade pip\npip install pandas<\/pre>\n\n\n\n<p style=\"text-align:left\">Then, <em>my_env<\/em> can be added to Jupyter following the steps below:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">source .\/my_env\/bin\/activate\npip install jupyter\npython -m ipykernel install --user --name python3-my_env --display-name \"Python 3 (my_env)\"<\/pre>\n\n\n\n<p>The new IPython kernel is installed in:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">~\/.local\/share\/jupyter\/kernels\/python3-my_env\/<\/pre>\n","protected":false},"excerpt":{"rendered":"<p>At ABACUS2.0 we maintain a short list of standard software available for our users. A few software packages are only available for some research groups. 
Users are not limited to using the software installed by<a class=\"moretag\" href=\"http:\/\/escience.sdu.dk\/index.php\/supported-applications\/\"> Read more&hellip;<\/a><\/p>\n","protected":false},"author":1,"featured_media":3986,"parent":0,"menu_order":38,"comment_status":"closed","ping_status":"closed","template":"page-templates\/template-fullwidth.php","meta":[],"_links":{"self":[{"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/pages\/1100"}],"collection":[{"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/comments?post=1100"}],"version-history":[{"count":44,"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/pages\/1100\/revisions"}],"predecessor-version":[{"id":4826,"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/pages\/1100\/revisions\/4826"}],"wp:featuredmedia":[{"embeddable":true,"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/media\/3986"}],"wp:attachment":[{"href":"http:\/\/escience.sdu.dk\/index.php\/wp-json\/wp\/v2\/media?parent=1100"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}