Cluster

General Information

home directory quota

There is a 10GB quota enforced on your $HOME directory (/global/home/users/username). Please keep your usage below this limit. NetApp snapshots will be in place on this file system, so we suggest you store only your source code and scripts in this area and keep all your data under /clusterfs/cortex (see below).
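To see how much space your home directory currently uses, du works as a portable check (the cluster may also provide a dedicated quota command):

 du -sh $HOME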

data

For large amounts of data, please create a directory

 /clusterfs/cortex/scratch/username

and store the data inside that directory.
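For example, assuming $USER holds your cluster username and results/ is a hypothetical data directory:

 mkdir -p /clusterfs/cortex/scratch/$USER
 mv ~/results /clusterfs/cortex/scratch/$USER/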

Connect

get a password

  • press the PASSWORD button on your crypto card
  • enter your password
  • press enter
  • the card displays a 7-digit password; use it without the dash

setup environment

  • put all your customizations into your .bashrc
  • for login shells, .bash_profile is used, which in turn loads .bashrc
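A minimal sketch of this arrangement (the PATH line is only an example customization):

 # ~/.bash_profile -- read by login shells; hand everything off to .bashrc
 if [ -f ~/.bashrc ]; then
     . ~/.bashrc
 fi

 # ~/.bashrc -- put your customizations (PATH changes, aliases, etc.) here
 export PATH=$PATH:$HOME/bin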

ssh to the gateway computer (hadley)

note: please don't use the gateway for computations (e.g. matlab)!

 ssh -Y neuro-calhpc.berkeley.edu        # or: ssh -Y hadley.berkeley.edu

and use your crypto password
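If you connect often, an entry in ~/.ssh/config saves retyping the flags; a sketch (the host alias name is arbitrary):

 # ~/.ssh/config
 Host hadley
     HostName hadley.berkeley.edu
     ForwardX11 yes

After that, "ssh hadley" is equivalent to the command above.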

Useful commands

Start an interactive session on a compute node

  • start interactive session:
 qsub -X -I
  • start interactive session on a particular node (nodes n0000.cortex and n0001.cortex have GPUs):
 qsub -X -I -l nodes=n0001.cortex
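Once the prompt comes back you are on the allocated compute node; a quick sanity check:

 hostname    # should print the node name, e.g. n0001.cortex
 exit        # ends the interactive session and releases the node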

Perceus commands

The Perceus manual is here.

  • listing available cluster nodes:
 wwstats
  • list cluster usage
 wwtop
  • to restrict the scope of these commands to the cortex cluster, add the following line to your .bashrc
 export NODES='*cortex'
  • list the environment modules currently loaded
 module list
  • list the modules available on the cluster
 module avail
  • show help for the module command
 module help
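A typical module workflow (the module name below is only an illustration; use module avail to see what is actually installed):

 module avail              # see what is available
 module load openmpi       # hypothetical module name
 module list               # confirm it has been loaded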


Resource Manager PBS

  • jobs are scheduled by MOAB
  • List running jobs:
 qstat -a
  • list a given job (here job 98) together with the nodes it is running on:
 qstat -n 98
  • sample script
 #!/bin/bash
 
 # request the cortex queue, 1 node with 2 cores, and 1 hour of walltime;
 # -o and -e name the files that receive standard output and standard error
 #PBS -q cortex
 #PBS -l nodes=1:ppn=2:cortex
 #PBS -l walltime=01:00:00
 #PBS -o path-to-output
 #PBS -e path-to-error
 cd /global/home/users/kilian/sample_executables
 # $PBS_NODEFILE lists the nodes assigned to this job
 cat $PBS_NODEFILE
 # run a trivial command on the 2 requested cores
 mpirun -np 2 /bin/hostname
 sleep 60
  • submit script
 qsub scriptname
  • interactive session
 qsub -I -q cortex -l nodes=1:ppn=2:cortex -l walltime=00:15:00
  • list nodes that your job is running on
 cat $PBS_NODEFILE
  • run a program on several cores (the -mca btl ^openib flag tells Open MPI not to use the InfiniBand transport):
 mpirun -np 4 -mca btl ^openib sample_executables/mpi_hello
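Putting the pieces together (a hypothetical walk-through; myjob.pbs stands for the sample script above and the job id is made up):

 qsub myjob.pbs      # prints the job id, e.g. 1234
 qstat -a            # watch the job in the queue
 qstat -n 1234       # once it runs, see which nodes it was given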

Matlab

note: remember to start an interactive session before starting matlab!

We don't currently have a proper Matlab installation. However, you can run an old version by appending the following to the .bashrc file in your home directory:

 export PATH=$PATH:/global/home/users/jack/matlab74/bin
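For example, after starting an interactive session (see above), Matlab can be run without the graphical desktop using its standard command-line flags:

 qsub -I -q cortex -l nodes=1:ppn=2:cortex -l walltime=01:00:00
 matlab -nodesktop -nosplash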

Sage

I've installed Sage 4.02 in ~amirk/sage. Sage's homepage is http://sagemath.org.

A sample PBS and MPI script is here:

 ~amirk/test

You can run it as:

 % mkdir -p ~/jobs
 % cd ~amirk/test
 % qsub pbs

In your interactive session, if you want to have a scipy environment (run ipython, etc), first do:

 % ~amirk/sage/sage -sh

then you can run:

 % ipython

or you can just do:

 % ~amirk/sage/sage -ipython

This is a temporary solution for people wanting to use scipy with mpi on the cluster. It was built against the default openmpi (1.2.8) (icc) and mpi4py 1.1.0. For those using hdf5, I also built hdf5 1.8.3 (gcc) and h5py 1.2.
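A quick check that these builds are picked up inside the Sage shell (a sketch; the import test only covers the packages mentioned above):

 % ~amirk/sage/sage -sh
 % python -c "import mpi4py, h5py; print 'ok'"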

Support Requests

  • If you have a problem that is not covered on this page, you can send an email to our user list:
 redwood_cluster@lists.berkeley.edu
  • If you need additional help from the LBL group, send an email to their email list. Please always cc our email list as well.
 scs@lbl.gov
  • In urgent cases, you can also email Krishna Muriki (LBL User Services) directly.