UMBC High Performance Computing Facility
Using your HPCF account
This page gives a tour of a typical maya account. While it is a standard Unix account, there are several special features to note, including the location and intent of the different storage areas and the availability of software. If you're having trouble with any of the material, or believe that your account may be missing something, contact user support.

Connecting to maya

The only nodes with a connection to the outside network are the user nodes. Within the system, their full hostnames are maya-usr1.rs.umbc.edu and maya-usr2.rs.umbc.edu (note the "-usr1" and "-usr2"). From the outside, you must refer to the hostname maya.rs.umbc.edu. To log in to the system, you must use a secure shell client such as SSH on Unix/Linux, PuTTY on Windows, or similar, and you connect to a user node, since these are the only nodes visible from the internet. For example, suppose we're connecting to maya from the Linux machine "linux1.gl.umbc.edu". We will take user "araim1" as our example throughout this page.
araim1@linux1.gl.umbc.edu[16]$ ssh araim1@maya.rs.umbc.edu
WARNING: UNAUTHORIZED ACCESS to this computer is in violation of Criminal
         Law Article section 8-606 and 7-302 of the Annotated Code of MD.

NOTICE:  This system is for the use of authorized users only. 
         Individuals using this computer system without authority, or in
         excess of their authority, are subject to having all of their
         activities on this system monitored and recorded by system
         personnel.


Last login: Mon Mar  3 14:17:05 2014 from ...

  UMBC High Performance Computing Facility	     http://www.umbc.edu/hpcf
  --------------------------------------------------------------------------
  If you have any questions or problems using this system please send mail to 
  hpc-support@lists.umbc.edu.  System technical issues should be reported
  via RT ticket to the "Research Computing" queue at https://rt.umbc.edu/

  Remember that the Division of Information Technology will never ask for
  your password. Do NOT give out this information under any circumstances.
  --------------------------------------------------------------------------

[araim1@maya-usr1 ~]$ 
Replace "araim1" with your UMBC username (that you use to log into myUMBC). You will be prompted for your password when connecting; your password is your myUMBC password. Notice that connecting to maya.rs.umbc.edu puts us on maya-usr1. We may connect to the other user node with the following.
[araim1@maya-usr1 ~]$ ssh maya-usr2
... same welcome message ...
[araim1@maya-usr2 ~]$

As another example, suppose we're SSHing to maya from a Windows machine with PuTTY. When setting up a connection, use "maya.rs.umbc.edu" as the hostname. Once you connect, you will be prompted for your username and password, as mentioned above.

If you intend to do something requiring a graphical interface, such as view plots, then see running X Windows programs remotely.
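For instance, a common approach (a minimal sketch; it assumes an X server is running on your local machine) is to enable X forwarding when connecting:

me@mymachine:~> ssh -X araim1@maya.rs.umbc.edu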

Available software

Our system runs Red Hat Enterprise Linux 6. We support only the bash shell. In addition to the software you'd find on a typical Linux system, the following are also available on maya:

We supply three compiler suites: Intel, GNU, and PGI.

The command used to compile code depends on the language and compiler used.
Language   Intel    GNU        PGI
C          icc      gcc        pgcc
C++        icpc     g++        pgCC
Fortran    ifort    gfortran   pgf77/pgf90/pgf95
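For example, assuming a serial C source file named hello.c (a hypothetical file) and that the matching compiler module is loaded, the three suites are invoked as follows:

icc  hello.c -o hello     # Intel
gcc  hello.c -o hello     # GNU
pgcc hello.c -o hello     # PGI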
We also provide three implementations of InfiniBand-enabled MPI: Intel MPI, MVAPICH2, and OpenMPI. The following table gives the command used to compile MPI code in C, C++, and Fortran for each implementation.
Language   Intel MPI   MVAPICH2               OpenMPI
C          mpiicc      mpicc                  mpicc
C++        mpiicpc     mpic++                 mpiCC
Fortran    mpiifort    mpif77/mpif90/mpif95   mpif77/mpif90/mpif95
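For example, assuming an MPI C source file named hello_mpi.c (a hypothetical file) and that the corresponding MPI module is loaded, the compile step is sketched below:

mpiicc hello_mpi.c -o hello_mpi    # Intel MPI
mpicc  hello_mpi.c -o hello_mpi    # MVAPICH2 or OpenMPI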
We also supply software for accessing the cluster's modern architectures. For cluster management and job scheduling, we use SLURM.

See resources for maya for a more complete list of the available software, along with tutorials to help you get started. For more details on the cluster management environment, Bright Computing offers a manual.

Storage areas

The directory structure that DoIT sets up as part of your account creation is designed to facilitate the work of research groups consisting of several users, and it reflects the fact that all HPCF accounts must be sponsored by a faculty member at UMBC. This sponsor is referred to as the PI (short for principal investigator) in the following. A user may be a member of one or several research groups on maya. Each research group has several storage areas on the system, in the locations specified below. See System Description for a higher level overview of the storage and the cluster architecture.

Note that some special users, such as students in MATH 627, may not belong to a research group and therefore may not have any of the group storage areas set up.

User Home
Location: /home/username/
This is where the user starts after logging in to maya. Only accessible to the user by default. The default size is 100 MB, and the storage is located on the management node. This area is backed up nightly.

User Workspace
Symlink: /home/username/pi_name_user
Mount point: /umbc/lustre/pi_name/users/username/
A central storage area for the user's own data, accessible only to the user and with read permission for the PI, but not accessible to other group members by default. Ideal for storing output of parallel programs, for example. This area is not backed up.

Group Workspace
Symlink: /home/username/pi_name_common
Mount point: /umbc/lustre/pi_name/common/
The same functionality and intent as User Workspace, except this area is accessible with read and write permission to all members of the research group. This area is like the group saved area, except it is larger and not backed up.

Scratch Space
Location: /scratch/NNNNN
Each compute node on the cluster has local /scratch storage. On nodes 1-69 the total scratch space available is 322 GB, on nodes 70-155 it is 132 GB, and on nodes 156-237 it is 361 GB. The space in this area is shared among the current users of the node, so the total amount available will vary based on system usage. This storage is convenient temporary space to use while your job is running, but note that your files here persist only for the duration of the job. Use of this area is encouraged over /tmp, which is also needed by critical system processes. Note that the subdirectory NNNNN (e.g. 22704) is created for your job by the scheduler at runtime.

For information on how to access scratch space from your job, see the how to run page.

Tmp Space
Location: /tmp/
Each machine on the cluster has its own local /tmp storage, as is customary on Unix systems. On all nodes the tmp space available is 25 GB. This scratch area is shared with other users and is purged periodically by the operating system, so it is only suitable for temporary scratch storage. Use of /scratch is encouraged over /tmp (see above).

AFS
Location: /afs/umbc.edu/users/u/s/username/
Your AFS storage is conveniently available on the cluster, but can only be accessed from the user node. The "/u/s" in the directory name should be replaced with the first two letters of your username (for example, user "straha1" would have the directory /afs/umbc.edu/users/s/t/straha1).
"Mount point" indicates the actual location of the storage on maya's filesystem. Traditionally, many users prefer to have a link to the storage from their home directory for easier navigation. The field "symlink" gives a suggested location for this link. For example, once the link is created, you may use the command "cd ~/pi_name_user" to get to User Workspace for the given PI. These links may be created for users as part of the account creation process; however, if they do not yet exist, simple instructions are provided below to create them yourself.

The amount of space available in the PI-specific areas depends on the allocation given to your research group. Your AFS quota is determined by DoIT. The quota for everyone's home directory is generally the same.

Some research groups have additional storage areas, or have storage organized in a different way than shown above. For more information, contact your PI or user support.

Note that listing the contents of /umbc/lustre may not show storage areas for all PIs. This is because PI storage is only mounted when it is in use. If you attempt to access a PI's subdirectory in /umbc/lustre or /umbc/research, it will be mounted (seamlessly) if it was previously offline.

The tutorial below will walk you through your home directory, and the specialized storage areas on maya.

A brief tour of your account

This section assumes that you already have an account, and you're a member of a research group. If you need to apply for an account, see the account request form. If you're not a member of a research group, you won't have access to the various group spaces.

Home directory

First, log in to maya from your local machine by SSH:
me@mymachine:~> ssh username@maya.rs.umbc.edu
Password: (type your password)
WARNING: UNAUTHORIZED ACCESS to this computer is in violation of Criminal
         Law Article section 8-606 and 7-302 of the Annotated Code of MD.

NOTICE:  This system is for the use of authorized users only. 
         Individuals using this computer system without authority, or in
         excess of their authority, are subject to having all of their
         activities on this system monitored and recorded by system
         personnel.


Last login: Sat Dec  5 01:39:23 2009 from hpc.rs.umbc.edu

  UMBC High Performance Computing Facility	     http://www.umbc.edu/hpcf
  --------------------------------------------------------------------------
  If you have any questions or problems regarding this system, please send
  mail to hpc-support@lists.umbc.edu.

  Remember that the Division of Information Technology will never ask for
  your password. Do NOT give out this information under any circumstances.
  --------------------------------------------------------------------------

[araim1@maya-usr1 ~]$
The Bash shell is the default shell for maya users; this is the shell you are assumed to be using in the documentation and examples on this webpage. Check your shell with the command "echo $SHELL", or by using "env" and searching for SHELL in the resulting lines of output.
[araim1@maya-usr1 ~]$ echo $SHELL
/bin/bash
[araim1@maya-usr1 ~]$
At any given time, the directory that you are currently in is referred to as your current working directory. Since you just logged in, your home directory is your current working directory. The "~" symbol is shorthand for your home directory. The program "pwd" tells you the full path of the current working directory, so let's run pwd to see where your home directory really is:
araim1@maya-usr1:~$ pwd
/home/araim1
Now let's use ls to get more information about your home directory.
araim1@maya-usr1:~$ ls -ld ~
drwx------ 23 araim1 pi_nagaraj 4096 Oct 29 22:35 /home/araim1
There is quite a bit of information on this line. If you're not sure what it means, this would be a good time to find a Linux/Unix reference; one example available on the web is The Linux Cookbook. What we want to emphasize is the string of permissions. The string "drwx------" indicates that only you have read, write, or execute access to this directory. (For a directory, "execute" access means the ability to browse inside it.) Therefore your home directory is private. The space in your home directory is limited, though; you can see this by using the "quota" command:
[araim1@maya-usr1 ~]$ quota
Disk quotas for user araim1 (uid 28398): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
   master:/home   74948  250000  350000            2301   10000   15000 
[araim1@maya-usr1 ~]$ 
These numbers tell you how much space you're using and how much is available. We limit two aspects of your storage: KB of disk space and the number of files you can create. For each of those quantities, you have two limits: a soft limit and a hard limit. When you reach your soft limit, you will have a little while to reduce your usage. If you wait too long before reducing your usage, or if you pass your hard limit, then you won't be able to make more files or enlarge existing files until you delete enough to get under your soft limit. Your hard and soft limits are also referred to as your quotas.
blocks   74948     I am currently using 74948 KB of disk space
quota    250000    My disk space soft limit is 250,000 KB
limit    350000    My disk space hard limit is 350,000 KB
grace              How far over my disk space soft limit I've gone (currently 0, since I'm below it)
files    2301      How many files I have
quota    10000     My soft limit for the number of files I can have
limit    15000     My hard limit for the number of files I can have
grace              How far over my file-count soft limit I've gone (currently 0, since I'm below it)
In your home directory, you are only allowed to create up to 10,000 files, taking up a total of 250,000 KB of storage space. That isn't much space for high performance computing, so you should plan on using the special storage areas that have been set up for you.
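If you find yourself near the limit, one way to see which items in your home directory are taking up the most space (a hedged sketch using standard GNU tools, nothing maya-specific; the largest items are listed last, and hidden files are not included) is:

[araim1@maya-usr1 ~]$ du -sh ~/* | sort -h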

Modules

Modules are a simple way of preparing your environment to use many of the major applications on maya. Modules are normally loaded for the duration of an SSH session. They can be unloaded as well, and can also be set to automatically load each time you log in. The following shows the modules which are loaded for you by default.
[jongraf1@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot                                 7) intel-mpi/64/4.1.3/049
  2) matlab/r2015a                       8) texlive/2014
  3) comsol/4.4                          9) quoter
  4) gcc/4.8.2                          10) git/2.0.4
  5) slurm/14.03.6                      11) default-environment
  6) intel/compiler/64/15.0/2015.1.133
This means that SLURM, GCC, MATLAB, TeX Live, COMSOL, and the Intel compiler with the Intel MPI implementation are usable by default as soon as you log in. If we wish to use other software such as R (for statistical computing), we must first load the appropriate module.
[araim1@maya-usr1 ~]$ Rscript -e 'exp(1)'
-bash: Rscript: command not found
[araim1@maya-usr1 ~]$ module load R/3.0.2 
[araim1@maya-usr1 ~]$ Rscript -e 'exp(1)'
[1] 2.718282
[araim1@maya-usr1 ~]$ 
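If you are not sure which versions of a package are installed, "module avail" lists the matching modulefiles; for example, to see the available R modules:

[araim1@maya-usr1 ~]$ module avail R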
To use compilers other than the default, you will need to unload and load modules from time to time. If you lose track and want to get back to the default setup, try the following commands:
[hu6@maya-usr1 ~]$ module purge
[hu6@maya-usr1 ~]$ module load default-environment
More information on modules is available here.

Group membership

Your account has membership in one or more Unix groups. On maya, groups are usually (but not always) organized by research group and named after the PI. The primary purpose of these groups is to facilitate sharing of files with other users, through the Unix permissions system. To see your Unix groups, try the following command:
[araim1@maya-usr1 ~]$ groups
pi_nagaraj contrib alloc_node_ssh hpcreu pi_gobbert
[araim1@maya-usr1 ~]$ 
In the example above, the user is a member of five groups - two of them correspond to research groups.

Special storage areas

A typical account on maya has access to several central storage areas. These areas are not backed up, and can be classified as "user" or "group" storage; see above for the complete descriptions. For each research group you belong to, you should have access to the following areas:
[araim1@maya-usr1 ~]$ ls -d /umbc/lustre/nagaraj/users/
/umbc/lustre/nagaraj/users/
[araim1@maya-usr1 ~]$ ls -d /umbc/lustre/nagaraj/common/
/umbc/lustre/nagaraj/common/
[araim1@maya-usr1 ~]$ 
We recommend creating the following symlinks in your home directory for easier navigation.
araim1@maya-usr1:~$ ls -l ~/nagaraj_common ~/nagaraj_user
lrwxrwxrwx 1 araim1 pi_nagaraj 33 Jul 29 15:48 nagaraj_common -> /umbc/lustre/nagaraj/common
lrwxrwxrwx 1 araim1 pi_nagaraj 33 Jul 29 15:48 nagaraj_user -> /umbc/lustre/nagaraj/users/araim1
If any of these do not exist, you may create them using the following commands. You only need to do this once. We suggest that you repeat it for each PI if you are a member of multiple research groups.
[araim1@maya-usr1 ~]$ ln -s /umbc/lustre/nagaraj/common ~/nagaraj_common
[araim1@maya-usr1 ~]$ ln -s /umbc/lustre/nagaraj/users/araim1 ~/nagaraj_user
[araim1@maya-usr1 ~]$
In the "ls" command output, we see that these are symbolic links instead of normal directories. Whenever you access "/home/araim1/nagaraj_common", you are actually redirected to "/umbc/lustre/nagaraj/common". If the link "/home/araim1/nagaraj_common" is removed, the actual directory "/umbc/lustre/nagaraj/common" is not affected. Note that certain research groups may need different links than the (standard) ones shown. Check with your PI.

Group Workspace

The intention of Group Workspace is to store reasonably large volumes of data, such as large datasets from computations, which can be accessed by everyone in your group. By default, the permissions of Group Workspace are set as follows to enable sharing among your group:
araim1@maya-usr1:~$ ls -ld /umbc/lustre/nagaraj/common
drwxrws--- 2 pi_nagaraj pi_nagaraj 2 Jul 29 14:56 /umbc/lustre/nagaraj/common/
The string "drwxrws---" indicates that the PI, who is the owner of the group, has read, write, and execute permissions in this directory. In addition, other members of the group also have read, write, and execute permissions. The "s" indicates that all directories created under this directory should inherit the same group permissions. (If this attribute were set but execute permissions were not enabled for the group, this would be displayed as a capital letter "S").

User Workspace

Whereas Group Workspace is intended as an area for collaboration, User Workspace is intended for individual work. Again, it is meant to store reasonably large volumes of data. Your PI and other group members can see your work in this area, but cannot edit it.
araim1@maya-usr1:~$ ls -ld /umbc/lustre/nagaraj/users/araim1
drwxr-sr-x 3 araim1 pi_nagaraj 3 Sep 21 21:59 /umbc/lustre/nagaraj/users/araim1
The string "drwxr-sr-x", means that only you may make changes inside this directory, but anyone in your group can list or read the contents. Other users appear to also have this access, but they are restricted further up the directory tree from accessing your PI's storage
araim1@maya-usr1:~$ ls -ld /umbc/lustre/nagaraj/
drwxrws--- 3 pi_nagaraj pi_nagaraj 3 Sep 21 21:59 /umbc/lustre/nagaraj/
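If you ever need a corner of User Workspace that even group members cannot read, standard permissions let you tighten a subdirectory; a hedged sketch with a hypothetical directory name:

mkdir ~/nagaraj_user/private
chmod 700 ~/nagaraj_user/private    # only the owner can list, read, or enter this directory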

Checking disk usage vs. storage limits

There are two types of storage limits to be aware of: quotas and the physical limits of the filesystems where your space is hosted. The following command checks the space for your home directory, User Workspace, and Group Workspace:
[araim1@maya-usr1 ~]$ df -h ~/ ~/nagaraj_user ~/nagaraj_common
Filesystem            Size  Used Avail Use% Mounted on
mgt-ib:/home           95G   20G   71G  22% /home
rstor1-ib:/export/nagaraj
                      100G  493M  100G   1% /umbc/lustre/nagaraj
rstor1-ib:/export/nagaraj
                      100G  493M  100G   1% /umbc/lustre/nagaraj
[araim1@maya-usr1 ~]$
Of course, your output will depend on which research group(s) you are a member of. The first column indicates where the data is physically stored. For example, the filesystem "rstor1-ib:/export/nagaraj" means that the data is stored on the remote machine "rstor1-ib", in the directory "/export/nagaraj". In the last column, "/umbc/lustre/nagaraj" indicates where this storage is mounted on the local machine. If you want to check the overall usage of the /umbc/lustre storage, follow the example below:
[hu6@maya-usr1 ~]$ df -h | grep gobbert
rstor1-ib:/export/gobbert
                      5.0T  3.1T  2.0T  61% /umbc/lustre/gobbert
When using the quota command, the "-g" option will display quotas associated with your group membership(s).

To check the quota for Lustre storage space, use the following command:

[hu6@maya-usr1 ~]$ lfs quota /umbc/lustre/
Disk quotas for user hu6 (uid 99429):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
  /umbc/lustre/ 3167825540       0       0       -  657029       0       0       -
Disk quotas for group pi_gobbert (gid 32296):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
  /umbc/lustre/ 13509401064* 10000000000 15000000000    none 7379141       0       0       -
[hu6@maya-usr1 ~]$ 

For tips on managing your disk space usage, see How to check disk usage

More about permissions

Standard Unix permissions are used on maya to control which users have access to your files. We've already seen some examples of this. It's important to emphasize that this mechanism determines the degree of sharing, and conversely the privacy, of your work on this system. In setting up your account, we've taken a few steps to simplify things, assuming you use the storage areas for the basic purposes for which they were designed. This should be sufficient for many users, but you can also customize your use of the permissions system if you need additional privacy, want to share with additional users, etc.

Changing a file's permissions

For existing files and directories, you can modify permissions with the "chmod" command. As a very basic example:
[araim1@maya-usr1 ~]$ touch tmpfile
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@maya-usr1 ~]$ chmod 664 tmpfile 
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rw-rw-r-- 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@maya-usr1 ~]$ 
See "man chmod" for more information, or the Wikipedia page for chmod

Changing a file's group

For users in multiple groups, you may find the need to change a file's ownership to a different group. This can be accomplished on a file-by-file basis by the "chgrp" command
[araim1@maya-usr1 ~]$ touch tmpfile 
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rw-rw---- 1 araim1 pi_nagaraj 0 Jun 14 18:00 tmpfile
[araim1@maya-usr1 ~]$ chgrp pi_gobbert tmpfile 
[araim1@maya-usr1 ~]$ ls -la tmpfile 
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:00 tmpfile
[araim1@maya-usr1 ~]$ 
You may also change your "currently active" group using the "newgrp" command
[araim1@maya-usr1 ~]$ id
uid=28398(araim1) gid=1057(pi_nagaraj) groups=1057(pi_nagaraj),32296(pi_gobbert)
[araim1@maya-usr1 ~]$ newgrp pi_gobbert
[araim1@maya-usr1 ~]$ id
uid=28398(araim1) gid=32296(pi_gobbert) groups=1057(pi_nagaraj),32296(pi_gobbert)
Now any new files created in this session will belong to the group pi_gobbert
[araim1@maya-usr1 ~]$ touch tmpfile2
[araim1@maya-usr1 ~]$ ls -la tmpfile2 
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:05 tmpfile2
[araim1@maya-usr1 ~]$ 
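To change the group of an entire directory tree at once, chgrp accepts the recursive flag -R; a hedged sketch with a hypothetical directory name:

[araim1@maya-usr1 ~]$ chgrp -R pi_gobbert ~/shared_project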

Umask

By default, your account will have a line in ~/.bashrc which sets your "umask"
umask 007
The umask is traditionally set to 022 on Unix systems, so this is a customization on maya. The umask helps to determine the permissions for new files and directories you create. Usually when you create a file, you don't specify what its permissions will be. Instead some defaults are used, but they may be too liberal. For example, suppose we created a file that got the following default permissions.
[araim1@maya-usr1 ~]$ ls -la secret-research.txt
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:02 secret-research.txt
All users on the system could read this file if they had access to its directory. The umask allows us to turn off specific permissions for newly created files. Suppose we want all new files to have "rwx" turned off for anyone who isn't us (araim1) or in our group (pi_nagaraj); a umask setting of "007" accomplishes this. To illustrate what this means, notice that 007 is a three-digit number in octal (base 8), which we can also write as the 9-digit binary number 000000111. Similarly, we can represent "rwxrwxr-x" (from our file above) as the 9-digit binary number 111111101; dashes correspond to 0's and letters correspond to 1's. The umask is applied in the following way to set the new permissions for our file:
        111111101    <-- proposed permissions for our new file
AND NOT(000000111)   <-- the mask
------------------
=       111111000
=       rwxrwx---    <-- permissions for our new file
In other words, umask 007 ensures that outside users have no access to your new files. See the Wikipedia entry for umask for more explanation and examples. On maya, the storage areas' permissions are already set up to enforce specific styles of collaboration. We've selected 007 as the default umask so that sharing with your group is still possible, but sharing with outside users is prevented. If you generally want to prevent your group from modifying your files (for example), even in the shared storage areas, you may want to use a more restrictive umask.

If you have any need to change your umask, you can do so permanently by editing ~/.bashrc, or temporarily for the current SSH session by using the umask command directly.

[araim1@maya-usr1 ~]$ umask
0007
[araim1@maya-usr1 ~]$ umask 022
[araim1@maya-usr1 ~]$ umask
0022
[araim1@maya-usr1 ~]$ 
Notice that typing "umask" with no arguments reports your current umask setting.
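To see the effect on newly created files, here is a small hedged demonstration (hypothetical file names; the parentheses run each line in a subshell so that your login session's umask is left untouched):

(umask 007; touch group_ok.txt; ls -l group_ok.txt)    # expect -rw-rw----
(umask 077; touch private.txt;  ls -l private.txt)     # expect -rw-------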

Configuring permissions of Group storage areas (PI only)

If you are a PI, you can add or remove the group write permission (the "w" in r-s/rws) using the chmod command. You may want to do this if you intend to place materials here for your group to read, but not edit. To add group write permission and let all members of your group create or delete files and directories in a directory called restricted_permission in your Group Workspace area:
araim1@maya-usr1:~$ chmod g+w ~/nagaraj_common/restricted_permission
To remove group write permissions so that only araim1 and the PI nagaraj can create or delete files in the directory:
araim1@maya-usr1:~$ chmod g-w ~/nagaraj_common/restricted_permission

AFS Storage

Your AFS partition is the directory where your personal files are stored when you use the DoIT computer labs or the gl.umbc.edu systems. You can access this partition from maya. In order to access AFS, you need an AFS token. You can check whether you currently have an AFS token:
straha1@maya-usr1:~> tokens

Tokens held by the Cache Manager:

Tokens for afs@umbc.edu [Expires Oct 25 00:16]
   --End of list--
The "Tokens for afs@umbc.edu" line tells me that we currently have tokens that let us access UMBC's AFS storage. The expiration date ("Expires Oct 25 00:16") tells us when our tokens will expire. When your tokens expire, an empty list will be returned
straha1@maya-usr1:~> tokens

Tokens held by the Cache Manager:

   --End of list--
We can renew our tokens using the "kinit" and "aklog" commands as follows. Note that kinit asks for your myUMBC password.
[araim1@maya-usr1 ~]$ kinit
Password for araim1@UMBC.EDU: 
[araim1@maya-usr1 ~]$ aklog
[araim1@maya-usr1 ~]$ tokens

Tokens held by the Cache Manager:

User's (AFS ID 28398) tokens for afs@umbc.edu [Expires Apr  4 05:57]
   --End of list--
[araim1@maya-usr1 ~]$
The "kinit" command may only be necessary for SSH sessions using public key / private pair, where typing of the password is bypassed at login time.

How to create simple files and directories

Now let's try creating some files and directories. First, let's make a directory named "testdir" in your home directory.
araim1@maya-usr1:~$ mkdir testdir
araim1@maya-usr1:~$ ls -ld testdir
drwxr-x--- 2 araim1 nagaraj 4096 Oct 30 00:12 testdir
araim1@maya-usr1:~$ cd testdir
araim1@maya-usr1:~/testdir$
The mkdir command created the directory testdir. Since your current working directory was ~ when you ran that command, testdir is inside your home directory; it is said to be a subdirectory of ~. The cd command changed your working directory to ~/testdir, and that is reflected by the new prompt: araim1@maya-usr1:~/testdir$. Now let's create a file in testdir:
araim1@maya-usr1:~/testdir$ echo HELLO WORLD > testfile
araim1@maya-usr1:~/testdir$ ls -l testfile
-rw-r----- 1 araim1 pi_groupname 12 Oct 30 00:16 testfile
araim1@maya-usr1:~/testdir$ cat testfile
HELLO WORLD
araim1@maya-usr1:~/testdir$ cat ~/testdir/testfile
HELLO WORLD
araim1@maya-usr1:~/testdir$
The echo command simply prints out its arguments ("HELLO WORLD"). The ">" tells your shell to send the output of echo into the file testfile. Since your current working directory is ~/testdir, testfile was created in testdir, and its full path is therefore ~/testdir/testfile. The program cat (short for "concatenate") prints out the contents of a file; the argument to cat (testfile or ~/testdir/testfile) is the file to print. As you can see, testfile does indeed contain "HELLO WORLD". Now let's delete testdir and testfile. To remove our directory with the "rmdir" command, we must first ensure that it is empty:
araim1@maya-usr1:~/testdir$ rm -i testfile
rm: remove regular file `testfile'? y
Now we delete the testdir directory with rmdir:
araim1@maya-usr1:~/testdir$ cd ~
araim1@maya-usr1:~$ rmdir testdir

How to copy files to and from maya

Probably the most general way to transfer files between machines is by Secure Copy (scp). Because some remote filesystems may be mounted to maya, it may also be possible to transfer files using "local" file operations like cp, mv, etc.

Method 1: Secure Copy (scp)

The maya cluster only allows secure connections from the outside. Secure Copy is the file copying program that is part of Secure Shell (SSH). To transfer files to and from maya, you must use scp or compatible software (such as WinSCP or SSHFS). On Unix machines such as Linux or Mac OS X, you can execute scp from a terminal window. Let's explain the use of scp with the following example: user "araim1" has a file hello.c in the subdirectory math627/hw1 of his home directory on maya. To copy the file to the current directory on another Unix/Linux system, run scp on that system:
me@mymachine:~> scp araim1@maya.rs.umbc.edu:~/math627/hw1/hello.c .
Notice carefully the period "." at the end of the above sample command; it signifies that you want the file copied to your current directory (without changing the name of the file). You can also send data in the other direction. Let's say you have a file /home/bob/myfile on your machine and you want to send it to a subdirectory "work" of your maya home directory:
me@mymachine:~> scp /home/bob/myfile araim1@maya.rs.umbc.edu:~/work/
The "/" after "work" ensures that scp will fail if the directory "work" does not exist. If you leave out the "/" and "work" was not a directory already, then scp would create a file "work" that contains the contents of /home/bob/myfile (which is not what we want). You may also specify a different name for the file at its remote destination.
me@mymachine:~> scp /home/bob/myfile araim1@maya.rs.umbc.edu:~/work/myfile2
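To copy an entire directory and its contents, scp accepts the recursive flag -r; a hedged sketch with a hypothetical directory name:

me@mymachine:~> scp -r araim1@maya.rs.umbc.edu:~/work/results .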

As with SSH, you can leave out the "araim1@" if your username is the same on both machines; that is the case on the GL login servers and the general lab Mac OS X and Linux machines. If you issue the command from within UMBC, you can also abbreviate the machine name to maya.rs. See the scp manual page for more information. You can access the scp manual page (referred to as a "man page") on a Unix machine by running the command:

man scp

Method 2: AFS

Another way to copy data is to use the UMBC-wide AFS filesystem. The AFS filesystem is where your UMBC GL data is stored: that includes your UMBC email, your home directory on the gl.umbc.edu login servers and the general lab Linux and Mac OS X machines, your UMBC webpage (if you have one), and your S: and some other drives on the general lab Windows machines. Any data you put in your AFS partition will be available on maya in the directory /afs/umbc.edu/users/a/r/araim1/, where "araim1" should be replaced with your username, and "a" and "r" should be replaced with the first and second letters of your username, respectively. As an example, suppose you're using a Mac OS X machine in a UMBC computer lab and you've SSHed into maya in a terminal window. Then, in that window you can type:
[araim1@maya-usr1 ~]$ cp ~/myfile /afs/umbc.edu/users/a/r/araim1/home/
and your file myfile in your maya home directory will be copied to myfile in your AFS home directory. Then, you can access that copy of the file on the Mac you're using, via ~/myfile. Note that it's only a copy of the file; ~/myfile on your Mac is not the same file as ~/myfile on maya. However, ~/myfile on your Mac is the same as /afs/umbc.edu/users/a/r/araim1/home/myfile on both your Mac and maya.

Make sure you've noted the section on AFS tokens above if you plan on using the AFS mount.

How to use the queuing system

See our How to compile C programs tutorial to learn how to run both serial and parallel programs on the cluster.
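As a rough orientation only (a hedged sketch; the partitions, QOS levels, and options appropriate for your work are covered in that tutorial), a minimal serial batch script might look like the following, saved for example as run.slurm:

#!/bin/bash
#SBATCH --job-name=hello_serial      # name shown in the queue
#SBATCH --partition=develop          # short test partition (see the hpc_qosstat output below)
#SBATCH --nodes=1                    # request one node
#SBATCH --ntasks-per-node=1          # run one process on it
#SBATCH --output=slurm.out           # standard output file
#SBATCH --error=slurm.err            # standard error file

./hello                              # hypothetical executable built earlier

It would be submitted with "sbatch run.slurm" and monitored with "squeue -u username".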

Things to check on your new maya account

Please run the following command to check your .bashrc file:

[hu6@maya-usr1 ~]$ more .bashrc
You should see output like this:
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# User specific aliases and functions
umask 007


# Load modules needed for the maya system
if [ -e /cm ]; then
module load default-environment
fi
export SQUEUE_FORMAT="%.7i %.9P %.8j %.8u %.2t %.10M %.6D %7q %R"
Please check that your umask is 007 and that you have the default-environment loaded. With the default environment, the following modules are ready for use:
[jongraf1@maya-usr1 ~]$ module list
Currently Loaded Modulefiles:
  1) dot                                 7) intel-mpi/64/4.1.3/049
  2) matlab/r2015a                       8) texlive/2014
  3) comsol/4.4                          9) quoter
  4) gcc/4.8.2                          10) git/2.0.4
  5) slurm/14.03.6                      11) default-environment
  6) intel/compiler/64/15.0/2015.1.133

Commands for HPC system monitoring

The following commands can be used for monitoring various aspects of the cluster. Descriptions and examples for the commands are given below.

The command hpc_jobs displays a map of the number of jobs running on each node.

[schou@maya-usr1 ~]$ hpc_jobs
UMBC HPC Job Count at Sat Feb 28 08:47:23 EST 2015
   n1-3  [   0   0   1 ]     n70-73  [   0   2   1   1 ]   n154-157  [   0   0   0   0 ]
   n4-6  [   1   1   0 ]     n74-77  [   4   3   4   1 ]   n158-161  [   0   0   0   0 ]
   n7-9  [   0   0   0 ]     n78-81  [   0   4   5   3 ]   n162-165  [   0   0   0   0 ]
 n10-12  [   0   1   0 ]     n82-85  [   2   1   0   1 ]   n166-169  [   0   0   0   0 ]
 n13-15  [   0   0   0 ]     n86-89  [   0   0   1   0 ]   n170-173  [   0   0   2   0 ]
 n16-18  [   0   0   1 ]     n90-93  [   1   8   1   3 ]   n174-177  [   0   0   0   0 ]
 n19-21  [   0   0   0 ]     n94-97  [   3   2   2   1 ]   n178-181  [   0   0   0   0 ]
 n22-24  [   0   0   0 ]    n98-101  [   5   2   2   1 ]   n182-185  [   0   0   0   0 ]
 n25-27  [   0   0   0 ]   n102-105  [   4   4   5   5 ]   n186-189  [   0   0   0   2 ]
 n28-30  [   0   4   1 ]   n106-109  [   1   0   6   6 ]   n190-193  [   0   0   0   0 ]
 n31-33  [   1   1   2 ]   n110-113  [   3   3   0   0 ]   n194-197  [   0   0   0   0 ]
 n34-36  [   0   2   2 ]   n114-117  [   6   1   1   0 ]   n198-201  [   0   0   0   0 ]
 n37-39  [   0   2   2 ]   n118-121  [   1   0   1   0 ]   n202-205  [   2   0   0   0 ]
 n40-42  [   0   0   0 ]   n122-125  [   0   0   1   0 ]   n206-209  [   0   0   0   0 ]
 n43-45  [   0   0   0 ]   n126-129  [   0   0   0   0 ]   n210-213  [   0   0   0   0 ]
 n46-48  [   0   0   0 ]   n130-133  [   0   0   0   0 ]   n214-217  [   0   0   0   0 ]
 n49-51  [   0   0   0 ]   n134-137  [   0   0   0   0 ]   n218-221  [   0   2   1   0 ]
 n52-54  [   0   0   0 ]   n138-141  [   0   0   0   0 ]   n222-225  [   0   0   0   0 ]
 n55-57  [   0   0   0 ]   n142-145  [   0   0   0   0 ]   n226-229  [   0   0   0   0 ]
 n58-60  [   2   2   0 ]   n146-149  [   0   0   0   0 ]   n230-233  [   0   0   0   0 ]
 n61-63  [   1   1   1 ]   n150-153  [   0   0   0   0 ]   n234-237  [   0   0   0   0 ]
 n64-66  [   1   1   1 ]
 n67-69  [   1   1   1 ]      usr1-2 [   0   0 ]                 mgt [   0 ]
Load1   TFlops  Occupancy
16.0    6.6     29.6

The command hpc_load displays a "heat map" showing where the highest-load systems are on maya.

[schou@maya-usr1 ~]$ hpc_load
UMBC HPC Load1 (%) at Sat Feb 28 08:45:33 EST 2015
   n1-3  [   0   0   0 ]     n70-73  [   0  25  12  13 ]   n154-157  [   0   0   0   0 ]
   n4-6  [   6   6   0 ]     n74-77  [  50  38  50  12 ]   n158-161  [   0   0   0   0 ]
   n7-9  [   1   0   1 ]     n78-81  [  12  50  62  38 ]   n162-165  [   0   0   0   0 ]
 n10-12  [   0   6   0 ]     n82-85  [  25  13   0  12 ]   n166-169  [   0   0   0   0 ]
 n13-15  [   1   1   0 ]     n86-89  [   0   0  12   1 ]   n170-173  [   0   0  50   0 ]
 n16-18  [   0   0  26 ]     n90-93  [  12 100   0  25 ]   n174-177  [   0   0   0   0 ]
 n19-21  [   0   0   0 ]     n94-97  [  25  25  25  12 ]   n178-181  [   0   0   0   0 ]
 n22-24  [   0   0   0 ]    n98-101  [  62  25  25  25 ]   n182-185  [   0   0   0   0 ]
 n25-27  [   0   0   1 ]   n102-105  [  50  50  62  62 ]   n186-189  [   0   0   0  12 ]
 n28-30  [   0  10 100 ]   n106-109  [   0   0  75  75 ]   n190-193  [   0   0   0   0 ]
 n31-33  [ 100 100  35 ]   n110-113  [  38  38   0   0 ]   n194-197  [   0   0   0   0 ]
 n34-36  [   0  20  29 ]   n114-117  [  75   0   0   0 ]   n198-201  [   0   0   0   0 ]
 n37-39  [   0  20  41 ]   n118-121  [   0   0 815   0 ]   n202-205  [  25   0   0   0 ]
 n40-42  [   0   0   0 ]   n122-125  [   0   0   0   0 ]   n206-209  [   0   0   0   0 ]
 n43-45  [   0   0   0 ]   n126-129  [   0   0   0   0 ]   n210-213  [   0   0   0   0 ]
 n46-48  [   1   0   0 ]   n130-133  [   0   0   0   0 ]   n214-217  [   0   0   0   0 ]
 n49-51  [   1   0   1 ]   n134-137  [   0   0   0   0 ]   n218-221  [   0  12  12   0 ]
 n52-54  [   0   0   0 ]   n138-141  [   0   0   0   1 ]   n222-225  [   0   0   0   0 ]
 n55-57  [   0   1   1 ]   n142-145  [   0   0   0   0 ]   n226-229  [   0   0   0   0 ]
 n58-60  [  19  41   0 ]   n146-149  [   0   0   0   0 ]   n230-233  [   0   0   0   0 ]
 n61-63  [ 100 100 100 ]   n150-153  [   0   0   0   0 ]   n234-237  [   0   0   0   0 ]
 n64-66  [ 100 100 100 ]
 n67-69  [ 101 100 100 ]      usr1-2 [   1   1 ]                 mgt [ 128 ]
Load1   TFlops  Occupancy
16.1    6.6     30.0

The command hpc_mem maps the memory usage on maya.

[schou@maya-usr1 ~]$ hpc_mem
UMBC HPC Memory Use (%) at Sat Feb 28 08:48:09 EST 2015
   n1-3  [   0   0   2 ]     n70-73  [   3  17  16   5 ]   n154-157  [   2   2   3  16 ]
   n4-6  [  26  57  17 ]     n74-77  [   9   2  25   7 ]   n158-161  [  15  14  38   1 ]
   n7-9  [  16  15  16 ]     n78-81  [   5   7   5  27 ]   n162-165  [   1   1   1   1 ]
 n10-12  [  10  15   9 ]     n82-85  [  11   4   1  14 ]   n166-169  [   1   1   1   1 ]
 n13-15  [   2   3  18 ]     n86-89  [   4   5   5   5 ]   n170-173  [   1   1  16   1 ]
 n16-18  [   2   9   4 ]     n90-93  [   5  33   9   4 ]   n174-177  [   1   1   1   1 ]
 n19-21  [   3   3  30 ]     n94-97  [  12  16   5   4 ]   n178-181  [   1   1   1   1 ]
 n22-24  [   1   2   3 ]    n98-101  [   7  23  13   2 ]   n182-185  [   1   1   1   1 ]
 n25-27  [   2   2   2 ]   n102-105  [   3  19  12   6 ]   n186-189  [   1   1   1  21 ]
 n28-30  [   2  17   8 ]   n106-109  [   1   1   4  12 ]   n190-193  [   1   1   1   1 ]
 n31-33  [   8   8  11 ]   n110-113  [   6   5   3   0 ]   n194-197  [   1   1   3   1 ]
 n34-36  [   1  13  11 ]   n114-117  [  18  16   4   5 ]   n198-201  [   1   1   1   1 ]
 n37-39  [   7  14  10 ]   n118-121  [   3   3  57   1 ]   n202-205  [  27   1   1   1 ]
 n40-42  [   7   1   1 ]   n122-125  [   1   1  18   1 ]   n206-209  [   1   1   1   1 ]
 n43-45  [   1   1   1 ]   n126-129  [   1   1   1   4 ]   n210-213  [   1   1   1   1 ]
 n46-48  [   1   1   1 ]   n130-133  [  16   7   4   5 ]   n214-217  [   1   1   1   1 ]
 n49-51  [   1   1   1 ]   n134-137  [   4   3   1   4 ]   n218-221  [   1  11  11   1 ]
 n52-54  [   1   1   2 ]   n138-141  [   3   1   8   5 ]   n222-225  [   1   1   1   1 ]
 n55-57  [   3   3   3 ]   n142-145  [   1   4   3  10 ]   n226-229  [   1   1   1   1 ]
 n58-60  [  14  11   7 ]   n146-149  [   5  13   2  10 ]   n230-233  [   1   1   1   1 ]
 n61-63  [   8   8   8 ]   n150-153  [   2   2   6   2 ]   n234-237  [   1   3   3   1 ]
 n64-66  [   8   8   8 ]
 n67-69  [   8   8   8 ]      usr1-2 [   6   2 ]                 mgt [  14 ]
TotalTB Active  Use%
8.42    0.53    5.92

The command hpc_ping displays the interconnect round-trip IP latency time in microseconds.

[schou@maya-usr1 ~]$ hpc_ping
UMBC HPC IB Ping Time to Master.ib (μs) at Mon Mar  2 11:41:22 EST 2015
   n1-3  [ 140 186  92 ]     n70-73  [ 152 145 187 145 ]   n154-157  [ 446 160 139 132 ]
   n4-6  [ 122 115 120 ]     n74-77  [ 163 612 144 157 ]   n158-161  [ 257 220 143 159 ]
   n7-9  [ 198 128  93 ]     n78-81  [ 141 168 173 175 ]   n162-165  [ 152 160 129 618 ]
 n10-12  [ 117 111 129 ]     n82-85  [ 146 132 146 170 ]   n166-169  [ 149 149 170 153 ]
 n13-15  [ 129 112  89 ]     n86-89  [ 142  79 139 174 ]   n170-173  [ 377 467 146 140 ]
 n16-18  [  94 500 193 ]     n90-93  [ 147 150 115 379 ]   n174-177  [ 143 140 139 152 ]
 n19-21  [ 127 128 130 ]     n94-97  [ 150 153 177 152 ]   n178-181  [ 141 127 166 157 ]
 n22-24  [ 150  99 121 ]    n98-101  [ 225 664 174 365 ]   n182-185  [ 167 183 128 179 ]
 n25-27  [ 123 160 112 ]   n102-105  [ 535 220 184 180 ]   n186-189  [ 188 170 146 109 ]
 n28-30  [ 114 134 117 ]   n106-109  [ 160 106 649 179 ]   n190-193  [ 178 151 157 173 ]
 n31-33  [ 117 120 227 ]   n110-113  [ 187 155 143 236 ]   n194-197  [ 131 180 183 407 ]
 n34-36  [  89 101 106 ]   n114-117  [ 152 161 159 107 ]   n198-201  [  88 386 100  97 ]
 n37-39  [ 101  95 748 ]   n118-121  [ 151  93 164 148 ]   n202-205  [ 500 132 199 133 ]
 n40-42  [  88 124  98 ]   n122-125  [ 153 178 166 160 ]   n206-209  [ 136 646 154 132 ]
 n43-45  [ 161 106  87 ]   n126-129  [ 677 147 621 160 ]   n210-213  [ 154 145 157 129 ]
 n46-48  [ 107 126  93 ]   n130-133  [  88 356 120 167 ]   n214-217  [ 172 160 127 190 ]
 n49-51  [ 116 395 110 ]   n134-137  [ 155 159  85 242 ]   n218-221  [ 175 128 158 663 ]
 n52-54  [ 107 482 118 ]   n138-141  [ 121 142 109  91 ]   n222-225  [ 597 132 187 170 ]
 n55-57  [  98 117 100 ]   n142-145  [ 138 163 104 156 ]   n226-229  [ 220 163 133 157 ]
 n58-60  [ 108 101 161 ]   n146-149  [  92 114 142 134 ]   n230-233  [ 160 160 141 124 ]
 n61-63  [ 121  98  92 ]   n150-153  [ 132 113  95 152 ]   n234-237  [  91  88 119  96 ]
 n64-66  [  99  95 251 ]
 n67-69  [ 132 127  98 ]      usr1-2 [  99 115 ]                 mgt [  53 ]

The command hpc_ping_lustre displays the interconnect round-trip IP latency time in microseconds for the Lustre file system.

[schou@maya-usr1 ~]$ hpc_ping_lustre
UMBC HPC Lustre Ping Stats (μs) at Sat Feb 28 08:49:30 EST 2015
   n1-3  [  1k  71  65 ]     n70-73  [  40  41  39  53 ]   n154-157  [  49  46  53  45 ]
   n4-6  [  62  68  46 ]     n74-77  [  57  55  47  43 ]   n158-161  [  42  42  50  53 ]
   n7-9  [  62  55  55 ]     n78-81  [  50  38  29  61 ]   n162-165  [  45  43  44  42 ]
 n10-12  [  48  51  51 ]     n82-85  [  79  49  43  61 ]   n166-169  [  49  43  43  42 ]
 n13-15  [  73  54  48 ]     n86-89  [  50  47  41  45 ]   n170-173  [  52  55  47  45 ]
 n16-18  [  56  59  57 ]     n90-93  [  55  46  47  27 ]   n174-177  [  39  46  50  40 ]
 n19-21  [  77  51  46 ]     n94-97  [  55  47  47  37 ]   n178-181  [  46  52  52  42 ]
 n22-24  [  62  49  71 ]    n98-101  [  37  44  52  43 ]   n182-185  [  49  40  42  44 ]
 n25-27  [  57  68  59 ]   n102-105  [  46  35  67  33 ]   n186-189  [  44  52  48  42 ]
 n28-30  [  80  78  61 ]   n106-109  [  52  45  54  53 ]   n190-193  [  48  47  58  46 ]
 n31-33  [  72  66  38 ]   n110-113  [  50  43  53  48 ]   n194-197  [  53  50  41  54 ]
 n34-36  [   0  54  36 ]   n114-117  [  40  47  48  51 ]   n198-201  [  49  45  48  54 ]
 n37-39  [  68  37  45 ]   n118-121  [  49  56  47  49 ]   n202-205  [  41  48  42  50 ]
 n40-42  [  70  73  66 ]   n122-125  [  42  44  44  41 ]   n206-209  [  44  41  49  44 ]
 n43-45  [  73  69  86 ]   n126-129  [  55  47  44  48 ]   n210-213  [  46  48  41  44 ]
 n46-48  [  69  74  82 ]   n130-133  [  43  50  52  48 ]   n214-217  [  44  51  43  46 ]
 n49-51  [  72  56  71 ]   n134-137  [  38  43  52  42 ]   n218-221  [  58  43  42  46 ]
 n52-54  [  65  63  49 ]   n138-141  [  45  47  47  48 ]   n222-225  [  44  46  43  48 ]
 n55-57  [  61  88  58 ]   n142-145  [  57  53  44  44 ]   n226-229  [  45  58  47  44 ]
 n58-60  [  57  34  58 ]   n146-149  [  36  45  47  43 ]   n230-233  [  53  55  62  44 ]
 n61-63  [  58  67  57 ]   n150-153  [  43  45  42  53 ]   n234-237  [  49  50  43  51 ]
 n64-66  [  84  60  57 ]
 n67-69  [  49 618  68 ]      usr1-2 [  69  57 ]                 mgt [  1k ]

The command hpc_net displays the IB network usage in bytes per second.

[schou@maya-usr1 ~]$ hpc_net
UMBC HPC IB Network Usage Bytes Per Second at Sat Feb 28 08:50:45 EST 2015
   n1-3  [   0   0   0 ]     n70-73  [   0   0   0   0 ]   n154-157  [   0   0   0   0 ]
   n4-6  [   0   0   0 ]     n74-77  [   0   0   0   0 ]   n158-161  [   0   0   0   0 ]
   n7-9  [   0   0   0 ]     n78-81  [   0   0   0   0 ]   n162-165  [   0   0   0   0 ]
 n10-12  [   0   0   0 ]     n82-85  [   0   0   0   0 ]   n166-169  [   0   0   0   0 ]
 n13-15  [   0   0   0 ]     n86-89  [   0   0   0   0 ]   n170-173  [   0   0   1   0 ]
 n16-18  [   0   0   0 ]     n90-93  [   0   0   0   0 ]   n174-177  [   0   0   0   0 ]
 n19-21  [   0   0   0 ]     n94-97  [   0   0   0   0 ]   n178-181  [   0   0   0   0 ]
 n22-24  [   0   0   0 ]    n98-101  [   0   0   0   0 ]   n182-185  [   0   0   0   0 ]
 n25-27  [   0   0   0 ]   n102-105  [   0   0   0   0 ]   n186-189  [   0   0   0   0 ]
 n28-30  [   0  11   1 ]   n106-109  [   0   0   0   0 ]   n190-193  [   0   0   0   0 ]
 n31-33  [   0   0  56 ]   n110-113  [   0   0   0   0 ]   n194-197  [   0   0   0   0 ]
 n34-36  [   0  56  68 ]   n114-117  [   0   0   0   0 ]   n198-201  [   0   0   0   0 ]
 n37-39  [   0  40  87 ]   n118-121  [   0   0   0   0 ]   n202-205  [   1   0   0   0 ]
 n40-42  [   0   0   0 ]   n122-125  [   0   0   0   0 ]   n206-209  [   0   0   0   0 ]
 n43-45  [   0   0   0 ]   n126-129  [   0   0   0   0 ]   n210-213  [   0   0   0   0 ]
 n46-48  [   0   0   0 ]   n130-133  [   0   0   0   0 ]   n214-217  [   0   0   0   0 ]
 n49-51  [   0   0   0 ]   n134-137  [   0   0   0   0 ]   n218-221  [   0   0   0   0 ]
 n52-54  [   0   0   0 ]   n138-141  [   0   0   0   0 ]   n222-225  [   0   0   0   0 ]
 n55-57  [   0   0   0 ]   n142-145  [   0   0   0   0 ]   n226-229  [   0   0   0   0 ]
 n58-60  [  31  71   0 ]   n146-149  [   0   0   0   0 ]   n230-233  [   0   0   0   0 ]
 n61-63  [   1   0   0 ]   n150-153  [   0   0   0   0 ]   n234-237  [   0   0   0   0 ]
 n64-66  [   1   0   0 ]
 n67-69  [   2   0   0 ]      usr1-2 [   4   0 ]                 mgt [201k ]

The command hpc_power maps out the power usage in Watts across the cluster.

[jongraf1@maya-usr1 ~]$ hpc_power
UMBC HPC Power Usage (Watts) at Mon Mar  2 11:27:09 EST 2015
   n1-3  [  67  71  96 ]     n70-73  [  80 304  84  68 ]   n154-157  [  68  68  96  68 ]
   n4-6  [ 117  97  101]     n74-77  [  60 152  60  76 ]   n158-161  [  76  56  72  68 ]
   n7-9  [ 108 106 107 ]     n78-81  [ 128  92  68  72 ]   n162-165  [  68  76  56  60 ]
 n10-12  [ 116  96 192 ]     n82-85  [  80 220 116  64 ]   n166-169  [  72  92  88  72 ]
 n13-15  [ 199 195 100 ]     n86-89  [ 160 148 132 196 ]   n170-173  [  80  72  84  80 ]
 n16-18  [ 111 109 179 ]     n90-93  [  64  68 128  68 ]   n174-177  [  72  68  84  84 ]
 n19-21  [ 198 201 214 ]     n94-97  [  68  64 160  64 ]   n178-181  [  80  84  84  68 ]
 n22-24  [ 223 209 210 ]    n98-101  [  68  60  76 208 ]   n182-185  [  88  76  72  76 ]
 n25-27  [ 212 221 204 ]   n102-105  [  72  56  68  72 ]   n186-189  [  76  64  72 140 ]
 n28-30  [ 212 186 182 ]   n106-109  [  68 176  60  68 ]   n190-193  [  84  72  88  68 ]
 n31-33  [ 188 180 198 ]   n110-113  [  84  68  60 172 ]   n194-197  [  64  72  84 256 ]
 n34-36  [ 338 337 324 ]   n114-117  [  76  80  68 240 ]   n198-201  [ 244 236 244 228 ]
 n37-39  [ 331 324 321 ]   n118-121  [  52 172  72  64 ]   n202-205  [ 156  76  80  72 ]
 n40-42  [ 322 335 321 ]   n122-125  [  76  76  80  56 ]   n206-209  [  80  64  80  80 ]
 n43-45  [ 341 334 340 ]   n126-129  [  68  72  72 148 ]   n210-213  [  80  80  80  88 ]
 n46-48  [ 337 324 319 ]   n130-133  [ 184 164 132 148 ]   n214-217  [  68  72  60  76 ]
 n49-51  [ 405 381 392 ]   n134-137  [ 132 124 172 164 ]   n218-221  [  76 148 144 100 ]
 n52-54  [ 168 163 170 ]   n138-141  [ 168 132 160 132 ]   n222-225  [  72 100  90 100 ]
 n55-57  [ 166 173 163 ]   n142-145  [ 164 144 176 162 ]   n226-229  [  90  92 100  90 ]
 n58-60  [ 166 157 163 ]   n146-149  [ 184 200 156 132 ]   n230-233  [  72  80  76 248 ]
 n61-63  [ 160 158 154 ]   n150-153  [  68 244 232  64 ]   n234-237  [ 236 236 244 204 ]
 n64-66  [ 156 169 154 ]
 n67-69  [ 152 162 156 ]      usr1-2 [ 192 342 ]                 mgt [ 145 ]
Min	Avg	Max	TotalKW
52	138	405	32.74

The command hpc_roomtemp displays the current temperature in degrees Celsius and Fahrenheit of the room in which the cluster is housed.

[jongraf1@maya-usr1 ~]$ hpc_roomtemp
C	F
17	63

The command hpc_temp displays a heat map of the air intake temperatures of the system in degrees Celsius.

[jongraf1@maya-usr1 ~]$ hpc_temp
UMBC HPC Inlet Temperature (Celcius) at Mon Mar  2 11:50:27 EST 2015
  user1  [  13 ]  user2  [  14 ]    mgt  [  22 ]
    n69  [  12 ]    n51  [  13 ]
    n68  [  11 ]    n50  [  13 ]    n33  [  17 ]
    n67  [  11 ]    n49  [  13 ] n31-32  [  17  17 ]
    n66  [  12 ]    n48  [  13 ] n29-30  [  16  16 ]
    n65  [  11 ]    n47  [  12 ] n27-28  [  16  16 ]
    n64  [  11 ]    n46  [  12 ] n25-26  [  16  16 ]
    n63  [  11 ]    n45  [  12 ] n23-24  [  15  16 ]
    n62  [  10 ]    n44  [  11 ] n21-22  [  15  15 ]
    n61  [  11 ]    n43  [  11 ] n19-20  [  14  14 ]
    n60  [  11 ]    n42  [  11 ] n17-18  [  14  13 ]
    n59  [  11 ]    n41  [  10 ] n15-16  [  13  13 ]
    n58  [  11 ]    n40  [  11 ] n13-14  [  13  13 ]
    n57  [  10 ]    n39  [  11 ] n11-12  [  13  13 ]
    n56  [  11 ]    n38  [  11 ]  n9-10  [  13  13 ]
    n55  [  11 ]    n37  [  11 ]   n7-8  [  13  13 ]
    n54  [  11 ]    n36  [  12 ]   n5-6  [  13  13 ]
    n53  [  12 ]    n35  [  13 ]   n3-4  [  14  13 ]
    n52  [  12 ]    n34  [  13 ]   n1-2  [  17  15 ]

The command hpc_uptime can be used to view the uptime of each node or the time since the last re-image.

[schou@maya-usr1 ~]$ hpc_uptime
UMBC HPC Uptime at Sat Feb 28 08:04:06 EST 2015
   n1-3  [  7h  7h  4d ]     n70-73  [ 18h  7d  7d  7d ]   n154-157  [  7d  7d 18h  7d ]
   n4-6  [ 2we 2we  4d ]     n74-77  [  7d 33h  7d  7d ]   n158-161  [  7d  7d  7d  7h ]
   n7-9  [  7d  7d  7d ]     n78-81  [  7d  4d  7d  7d ]   n162-165  [ 17h 17h 17h 17h ]
 n10-12  [  6d  6d  6d ]     n82-85  [  7d  7d  7d  7d ]   n166-169  [ 17h 17h 17h 17h ]
 n13-15  [  6d  6d  6d ]     n86-89  [  7d  7d  7d  7d ]   n170-173  [ 17h 17h  7d 17h ]
 n16-18  [  6d 2we  6d ]     n90-93  [  7d 17h  7d  7d ]   n174-177  [ 17h 17h 17h 17h ]
 n19-21  [  6d  6d  7d ]     n94-97  [  7d  7d  7d  7d ]   n178-181  [ 17h 17h 17h 17h ]
 n22-24  [ 20h  7d  7d ]    n98-101  [  7d  7d  7d  7d ]   n182-185  [ 17h 17h 17h 17h ]
 n25-27  [ 16h 15h 15h ]   n102-105  [  7d  7d  7d  7d ]   n186-189  [ 17h 17h 17h  7d ]
 n28-30  [ 15h 16h 16h ]   n106-109  [ 16h  7d  7d  7d ]   n190-193  [ 17h 17h 17h 17h ]
 n31-33  [ 16h  7h  7h ]   n110-113  [  7d  5d 18h  7d ]   n194-197  [ 17h 17h 18h  7h ]
 n34-36  [  7h 10h 10h ]   n114-117  [ 16h  7d  7d  7d ]   n198-201  [ 17h 17h  7h 17h ]
 n37-39  [ 10h 10h 10h ]   n118-121  [  7d 16h 16h 16h ]   n202-205  [  7d 17h 17h 17h ]
 n40-42  [ 10h 10h 10h ]   n122-125  [ 16h 16h 16h 16h ]   n206-209  [ 17h 17h 17h 17h ]
 n43-45  [ 10h 10h 10h ]   n126-129  [ 16h 16h 16h  7d ]   n210-213  [ 17h 17h 17h 17h ]
 n46-48  [ 10h 10h 10h ]   n130-133  [  7d  7d  7d  7d ]   n214-217  [ 17h 17h 17h 17h ]
 n49-51  [ 10h 10h 10h ]   n134-137  [  7d  7d  7d  7d ]   n218-221  [ 17h  7d  7d 17h ]
 n52-54  [  6d  7d  7d ]   n138-141  [  7d  7d  7d  7d ]   n222-225  [ 17h 17h 17h 17h ]
 n55-57  [  7d  7d  7d ]   n142-145  [  7d  7d  7d  7d ]   n226-229  [ 17h 17h 17h 17h ]
 n58-60  [ 16h 16h 16h ]   n146-149  [  7d  7d  7d  7d ]   n230-233  [ 17h 17h 17h 17h ]
 n61-63  [ 16h 16h 16h ]   n150-153  [  7d  6d  6d 17h ]   n234-237  [ 17h  7d  7d 17h ]
 n64-66  [ 16h 16h 16h ]
 n67-69  [ 17h 17h 17h ]      usr1-2 [ 4we 4we ]                 mgt [ 4we ]

The command hpc_qosstat can be used to view the current QOS usage, limitations, and partition breakdowns.

[jongraf1@maya-usr1 ~]$ hpc_qosstat
Current QOS usage:
QOS (Wait Reason)            Count
---------------------------- -----
long(None)                      36
medium(None)                   580
medium(Priority)                65
medium(QOSResourceLimit)       160
long(Priority)                   2
long_contrib(None)              67
support(Resources)            1664

QOS Limitations:
      Name  GrpCPUs MaxCPUsPU 
---------- -------- --------- 
    normal                    
      long      256        16 
long_cont+      768       256 
     short                    
    medium     1536       256 
   support                    

Partition     Active    Idle     N/A   Total  (CPUs)
------------ ------- ------- ------- ------- 
batch            699    1033     636    2368
develop*           0      64       0      64
prod             323     735     238    1296
mic                0    8640       0    8640
develop-mic        0       2       0       2