BinAC/Quickstart Guide
= Basics =

Use the work file system, not your home directory, for your calculations. Create a working directory named after your username.

<source lang="bash">
cd /beegfs/work/
mkdir <username>
cd <username>
</source>
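Your input files typically have to be staged from your home directory into this work directory; a plain cp or rsync is enough. This is only a sketch, and ~/my_project is a placeholder path.

<source lang="bash">
# stage input data into the work directory (~/my_project is a placeholder)
rsync -av ~/my_project/ /beegfs/work/<username>/my_project/
</source>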
Do not use the login nodes to carry out calculations or heavy file transfers.
= Check the Queue =

To list all running and queued jobs:

<source lang="bash">
qstat
</source>

To list only your own jobs:

<source lang="bash">
qstat -u <username>
</source>
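qstat can also list the queues themselves; on Torque/PBS systems the following shows each queue with its job counts and limits (the short and gpu queues used later in this guide should appear here).

<source lang="bash">
qstat -q
</source>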
= Simple Interactive Job =

To start a one-core job on a compute node with an interactive remote shell:

<source lang="bash">
qsub -q short -l nodes=1:ppn=1 -I
</source>

The same, but requesting a whole node:

<source lang="bash">
qsub -q short -l nodes=1:ppn=28 -I
</source>
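An interactive session on a GPU node should work the same way by combining the gpu queue with the resource string from the GPU script further down; this is an untested sketch.

<source lang="bash">
qsub -q gpu -l nodes=1:ppn=28:gpus=4:exclusive_process -I
</source>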
Standard Unix commands are directly available; for everything else, use the module system:

<source lang="bash">
module avail
</source>

Just an example:

<source lang="bash">
module load chem/gromacs/4.6.7-gnu-4.9
g_luck
</source>
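If you want to see what a module actually sets up, or which modules are loaded in your current shell, the usual module sub-commands are available (shown here with the GROMACS module from above).

<source lang="bash">
module show chem/gromacs/4.6.7-gnu-4.9   # paths and variables the module sets
module list                              # modules loaded in the current shell
</source>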
Be aware that we allow node sharing. Do not disturb the calculations of other users.
= Simple Script Job =

Use your favourite text editor to create a script, for example script.sh:

<source lang="bash">
#PBS -l nodes=1:ppn=1
#PBS -l walltime=00:05:00
#PBS -S /bin/bash
#PBS -N Simple_Script_Job
#PBS -j oe
#PBS -o LOG
#PBS -n

# change to the directory the job was submitted from
cd $PBS_O_WORKDIR

echo "My username is:"
whoami
echo "My job is running on node:"
uname -a

module load chem/gromacs/4.6.7-gnu-4.9
g_luck
</source>
Submit the job using:

<source lang="bash">
qsub -q short script.sh
</source>
Take note of the job ID that qsub prints.
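With that ID you can follow the state of this particular job; qstat accepts a job ID directly (standard Torque/PBS behaviour).

<source lang="bash">
qstat <jobID>       # one-line status: Q = queued, R = running, C = completed
qstat -f <jobID>    # full details, including the node the job runs on
</source>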
= Killing a Job =

Let's assume you built a Homer and want to stop, kill, or remove a running job:

<source lang="bash">
qdel <jobID>
</source>
= Fancy Script Job =

As an example, a GROMACS GPU run using a whole GPU node:

<source lang="bash">
#PBS -l nodes=1:ppn=28:gpus=4:exclusive_process
#PBS -l walltime=36:00:00
#PBS -S /bin/bash
#PBS -N Gromacs_GPU
#PBS -j oe
#PBS -o LOG
#PBS -n

# start from a clean module environment
module purge
module load devel/cuda/7.5 mpi/mvapich2/2.1-gnu-4.9 numlib/openblas/0.2.18-gnu-4.9
source /opt/bwhpc/common/chem/gromacs/2016_gnu-4.9/bin/GMXRC.bash

cd $PBS_O_WORKDIR

# preprocess; grompp writes topol.tpr by default
gmx grompp -f NPT.mdp -c protein.pdb -n index.ndx -p topol.top

# 4 thread-MPI ranks x 7 OpenMP threads = 28 cores, one GPU per rank (IDs 0-3)
mdrun_s_gpu -v -deffnm NPT_protein -pin on -ntmpi 4 -ntomp 7 -gpu_id 0123 -s topol.tpr
</source>
Submit with:

<source lang="bash">
qsub -q gpu script.sh
</source>
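Because mdrun was started with -deffnm NPT_protein, GROMACS writes its log file into the submission directory while the job runs, so progress and performance can be followed there (a sketch, assuming the defaults of the script above).

<source lang="bash">
# run this in the directory you submitted from ($PBS_O_WORKDIR)
tail -f NPT_protein.log
</source>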
There are tons of options, details, and caveats. If anything is not working the way you would like, send an email to [mailto:hpcmaster@uni-tuebingen.de hpcmaster].