Using the NYU HPC Server

Software: iTerm2 and FileZilla

iTerm2 Log In:
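Logging in from iTerm2 is a one-line ssh command. A minimal sketch, using the NetID yf31 from the examples below; the Greene cluster hostname is an assumption, so check the NYU HPC documentation for the current login host:

```shell
# Log in to the cluster from a local iTerm2 terminal.
# Hostname is an assumption -- verify against the NYU HPC docs.
ssh yf31@greene.hpc.nyu.edu
```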


Starting an interactive session

srun --pty /bin/bash
## By default, the allocation is a single CPU core with 2 GB of memory for 1 hour

srun --nodes=1 --cpus-per-task=16 --mem=16GB --time=04:00:00 --pty /bin/bash
## You can request specific resources for an interactive session; the example above requests 16 CPU cores with 16 GB of memory for 4 hours.
## To leave an interactive session, type exit at the command prompt.
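Once inside an interactive session, you can confirm the allocation by inspecting the environment variables Slurm sets (`SLURM_JOB_ID` and `SLURM_CPUS_PER_TASK` are standard Slurm variables):

```shell
# Inside an interactive session, check what was actually allocated.
# The :-unset fallback keeps this safe to run outside a Slurm job too.
echo "Job ID: ${SLURM_JOB_ID:-unset}"
echo "CPUs per task: ${SLURM_CPUS_PER_TASK:-unset}"
```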

Finding the available modules

module avail
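`module avail` with no arguments prints everything; passing a name filters the list. `module spider` searches all versions across the module tree, assuming the cluster uses Lmod (which is an assumption here):

```shell
# List only modules whose name starts with "r"
module avail r

# Search the whole module tree for matlab versions (Lmod only)
module spider matlab
```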

Loading a module (e.g., R)

module load r/gcc/4.0.3
## Some R packages can only be installed under this GCC build

module load r/intel/4.0.4

module load matlab/2020a
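The two R builds above conflict with each other, so it helps to check what is loaded and clear it before switching; `module list` and `module purge` are standard module commands:

```shell
module list              # show currently loaded modules
module purge             # unload everything before switching builds
module load r/intel/4.0.4
```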

Submitting a batch job

cd /scratch/yf31
## Use this folder for job output, as it has a much larger quota than /home
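From the scratch folder, submit a job script with sbatch and monitor it with squeue; the script name run.sh is a placeholder for your own script:

```shell
# Submit a batch script (run.sh is a placeholder)
sbatch run.sh

# Check the status of your queued and running jobs
squeue -u "$USER"
```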

Transferring data from Dropbox to NYU HPC

rclone copy dropbox:Projects/XXX /home/yf31/YYY 
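This assumes a Dropbox remote named dropbox has already been set up with `rclone config`. Before a large copy, it can help to list the remote and preview the transfer; `lsd` and `--dry-run` are standard rclone options:

```shell
# List top-level directories on the configured Dropbox remote
rclone lsd dropbox:

# Preview what would be copied without transferring anything
rclone copy --dry-run dropbox:Projects/XXX /home/yf31/YYY
```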

File transfer using FileZilla

  • Need to connect to NYU VPN first
  • Suggestion: use the /home/yf31 folder to store scripts and /scratch/yf31 to store program output
  • REASON: The scratch folder is not backed up but has a larger quota; the home folder is backed up but has only a 20 GB quota.
  • For each project, create a folder under /home and another folder under /scratch.
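The per-project layout in the last bullet is a single mkdir invocation; the project name myproject is a placeholder:

```shell
# Create matching project folders under /home (scripts) and /scratch (output)
project=myproject
mkdir -p /home/yf31/"$project" /scratch/yf31/"$project"
```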

Sample .sh file for Matlab

#!/bin/bash
#SBATCH --job-name=ex_matlab
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --mem=2GB
#SBATCH --time=01:00:00
#SBATCH --output=/scratch/yf31/masp/output/slurm_%j.out

module load matlab/2020a

## SLURM_ARRAY_TASK_ID is only set when the script is submitted as an array job (sbatch --array=...)
## The trailing "exit" makes MATLAB quit when the function returns instead of waiting at the prompt
matlab -nodisplay -r "main_MASP_CS_v1(${SLURM_ARRAY_TASK_ID}); exit" # > /scratch/yf31/masp/${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.txt
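Because the script reads SLURM_ARRAY_TASK_ID, it is meant to be submitted as a job array; the range 1-10 and the script name are placeholders:

```shell
# Run the script as 10 array tasks; each task sees a different SLURM_ARRAY_TASK_ID
sbatch --array=1-10 run_matlab.sh
```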

Sample .sh file for R

#!/bin/bash
#SBATCH --job-name=ex_R
#SBATCH --nodes=1
#SBATCH --tasks-per-node=1
#SBATCH --mem=2GB
#SBATCH --time=01:00:00
#SBATCH --output=/scratch/yf31/masp/output/slurm_%j.out

module load r/gnu/3.5.1

## SLURM_ARRAY_JOB_ID and SLURM_ARRAY_TASK_ID are only set for array jobs (sbatch --array=...)
R CMD BATCH --no-save --vanilla ex.R /scratch/yf31/masp/${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}.txt

Setting up SSH key authentication to avoid entering the password every time

Step 1: Create public and private keys using ssh-keygen on local-host
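A minimal sketch of step 1; the ed25519 key type is an assumption (RSA also works via -t rsa). Pressing Enter at the passphrase prompt leaves the key unprotected, which is what makes the login password-free:

```shell
# Generate a key pair in ~/.ssh/ (id_ed25519 and id_ed25519.pub)
ssh-keygen -t ed25519
```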


Step 2: Copy the public key to remote-host using ssh-copy-id

ssh-copy-id -i ~/.ssh/id_ed25519.pub remote-host
## Point -i at your public key file; id_ed25519.pub here assumes the default name from ssh-keygen -t ed25519