Using the NYU HPC Server

Software: iTerm2 and FileZilla

iTerm2 Log In:

ssh yf31@prince.hpc.nyu.edu

Starting an interactive session

srun --pty /bin/bash 
## By default, the allocated resources are a single CPU core and 2GB of memory for 1 hour

srun -n4 -t2:00:00 --mem=4000 --pty /bin/bash
## You can also request specific resources for an interactive session; the command above asks for 4 processors with 4GB of memory for 2 hours.
## To leave an interactive session, type exit at the command prompt.
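
Once the session starts, it can be worth confirming what was actually allocated; a quick sketch using standard SLURM commands:

echo $SLURM_JOB_ID
## Job ID assigned to the interactive session

scontrol show job $SLURM_JOB_ID
## Shows the CPUs, memory, and time limit that were granted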

Finding the available modules

module avail

Load a module (e.g., R)

module load r/gnu/3.5.1
## My preferred option, as some packages can only be installed under the GNU build

module load r/intel/3.6.0

module load matlab/2020a
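
The module list is long; assuming the usual Environment Modules/Lmod setup, it can be filtered and inspected as follows:

module avail r
## Filter the available modules by name (here: modules matching "r")

module list
## Show the modules currently loaded in this session

module purge
## Unload all modules, e.g. before switching between the gnu and intel builds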

Submitting a batch job

cd /scratch/yf31
## Use this folder for running jobs and storing output, as it has a much larger quota than /home
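
The actual submission is done with sbatch; a minimal sketch, assuming the sample script further below is saved as ex_matlab.sh:

sbatch ex_matlab.sh
## Prints the job ID on submission

squeue -u yf31
## Check the status of your queued and running jobs

scancel 1234567
## Cancel a job by its ID (1234567 is just a placeholder)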

Transferring data from Dropbox to NYU HPC

rclone copy dropbox:Projects/XXX /home/yf31/YYY 
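
The dropbox: remote has to be configured once before the copy will work; a sketch using standard rclone commands:

rclone config
## One-time interactive setup that authorizes rclone against Dropbox and names the remote "dropbox"

rclone lsd dropbox:
## List the top-level Dropbox folders to confirm the remote is working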

File transfer using FileZilla

  • Connect to the NYU VPN first.
  • Suggestion: use the /home/yf31 folder to store scripts and the /scratch/yf31 folder to store program output.
  • Reason: the scratch folder is not backed up but has a larger quota; the home folder is backed up but has only a 20GB quota.
  • For each project, create a folder under /home and another under /scratch (see the sketch after this list).
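
For example, for a hypothetical project called masp (matching the paths used in the sample scripts below):

mkdir -p /home/yf31/masp
## Scripts live here (backed up, small quota)

mkdir -p /scratch/yf31/masp/output
## Program output goes here (not backed up, larger quota); this matches the --output path in the sample scripts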

Sample .sh file for MATLAB

#!/bin/bash
#
#SBATCH --job-name=ex_matlab
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --mem=2GB
#SBATCH --time=01:00:00
#SBATCH --output=/scratch/yf31/masp/output/slurm_%j.out
module load matlab/2020a

matlab -nodisplay -r "main_MASP_CS_v1(${SLURM_ARRAY_TASK_ID})" # > /scratch/yf31/masp/${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.txt
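
Because the script reads ${SLURM_ARRAY_TASK_ID}, it is meant to be submitted as a job array; a sketch, with the file name ex_matlab.sh and the index range 1-10 chosen only for illustration:

sbatch --array=1-10 ex_matlab.sh
## Launches 10 tasks; each calls main_MASP_CS_v1 with its own task ID (1..10)

Note that MATLAB started with -r keeps running after the function returns unless exit is called, so appending ; exit inside the quotes (or having the function call exit) is a common safeguard.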

Sample .sh file for R

#!/bin/bash
#
#SBATCH --job-name=ex_R
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --mem=2GB
#SBATCH --time=01:00:00
#SBATCH --output=/scratch/yf31/masp/output/slurm_%j.out
module load r/gnu/3.5.1

R CMD BATCH --no-save --vanilla ex.R /scratch/yf31/masp/${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.txt
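
This script also relies on the SLURM array variables, so it is submitted the same way (assuming it is saved as ex_R.sh; the index range is again a placeholder). Inside ex.R, the task ID can be read with Sys.getenv("SLURM_ARRAY_TASK_ID") if it is needed for the computation:

sbatch --array=1-10 ex_R.sh
## Each task writes its R console output to /scratch/yf31/masp/<array job ID>_<task ID>.txt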
