UMBC High Performance Computing Facility
Using your account on tara
The following page gives a tour through a typical tara account. While it is a standard Unix account, there are several special features to note, including the location and intent of the different storage areas and the availability of software. If you're having trouble with any of the material, or believe that your account may be missing something, contact user support.

Connecting to tara

The only node with a connection to the outside network is the user node, also called the front end node. Internally to the system, its full hostname is tara-fe1.rs.umbc.edu (notice the "-fe1"), but from the outside its name is tara.rs.umbc.edu. To log in to the system, you must use a secure shell client such as ssh from Unix/Linux, PuTTY from Windows, or similar. For example, suppose we're connecting to tara from the Linux machine "linux1.gl.umbc.edu".
username@linux1.gl.umbc.edu[16]$ ssh username@tara.rs.umbc.edu
...
[username@tara-fe1 ~]$ 
where "username" is your UMBC username (that you use in myUMBC and that is the base part of your UMBC e-mail address). You will be prompted for your password when connecting; your password is your myUMBC password.

As another example, suppose we're SSHing to tara from a Windows machine with PuTTY. When setting up a connection, use "tara.rs.umbc.edu" as the hostname. Once you connect, you will be prompted for your username and password, as mentioned above.

If you intend to make plots with MATLAB or IDL (or do some other graphical work) then see Running X Windows programs remotely.

Available software

In addition to the software you'd find on a typical Linux system, a range of compilers, libraries, and applications is installed on tara. See resources for tara for a complete list of the available software, along with tutorials to help you get started.

Storage areas

The directory structure that DoIT will set up as part of your account creation is designed to facilitate the work of research groups consisting of several users and also reflects the fact that all HPCF accounts must be sponsored by a faculty member at UMBC. This sponsor will be referred to as PI (for principal investigator) in the following. To give a concrete example, let us assume that you are a student with username "username", who works in the research group of a PI with username "pi_name", who sponsors the account. Let's assume that the account for the PI has already been created and discuss the example from the standpoint of a new account for the student "username" being created. The following storage areas should be available in the specified locations (see System Description for an overview).
User Home
Location: /home/username/
This is where the user starts after logging in to tara. Only accessible to the user by default. Default size is 100 MB; storage is located on the management node. Backed up.

Group Saved
Symlink: /home/username/pi_name_saved
Mount point: /group_saved/pi_name/
A storage area for files to be shared with the user's research group. Ideal for working on papers or code together, for example, because it is accessible with read and write permission to all members of the research group and it is backed up regularly.
04/14/2010: Currently the Group Saved storage area is being finalized by DoIT. It is not available to most users yet.

User Workspace
Symlink: /home/username/pi_name_user
Mount point: /umbc/research/pi_name/users/username/
A central storage area for the user's own data, accessible only to the user (with read permission for the PI), but not accessible to other group members by default. Ideal for storing output of parallel programs, for example. Nightly snapshots of this data are kept for ten days, in case of accidental deletion.

Group Workspace
Symlink: /home/username/pi_name_common
Mount point: /umbc/research/pi_name/common/
The same functionality and intended use as User Workspace, except that this area is accessible with read and write permission to all members of the research group. This area is like Group Saved, except that it is larger and not backed up. Nightly snapshots of this data are kept for ten days, in case of accidental deletion.

Scratch Space
Location: /scratch/NNNNN
Each compute node on the cluster has 100 GB of local /scratch storage. This storage is convenient temporary space to use while your job is running, but note that your files here persist only for the duration of the job. The space in this area is shared among current users of the node. Use of this area is encouraged over /tmp, which is also needed by critical system processes. Note that a subdirectory NNNNN (e.g. 22704) is created for your job by the scheduler at runtime.

Tmp Space
Location: /tmp/
Each machine on the cluster has its own local /tmp storage, as is customary on Unix systems. This scratch area is shared with other users and is purged periodically by the operating system, so it is only suitable for temporary scratch storage. Use of /scratch is encouraged over /tmp (see above).

AFS
Location: /afs/umbc.edu/users/u/s/username/
Your AFS storage is conveniently available on the cluster, but can only be accessed from the front end node. The "u/s" in the directory name should be replaced with the first two letters of your username (for example, user "straha1" would have the directory /afs/umbc.edu/users/s/t/straha1).
"Symlink" indicates that you will have a link created for you to the storage area for your convenience. This link is placed in your home directory so that you can easily navigate to your storage. For example, you may use the command "cd ~/pi_name_user" to get to User Workspace. "Mount point" indicates the actual location of the storage on tara's filesystem.
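These convenience links are ordinary Unix symbolic links, so you can see how they behave anywhere. A minimal local sketch (throwaway paths in a temporary directory, not the actual tara mounts):

```shell
# Local demo of how the convenience links work (throwaway paths, not tara's)
demo=$(mktemp -d)
mkdir -p "$demo/mount/users/alice"                 # stands in for the mount point
ln -s "$demo/mount/users/alice" "$demo/alice_user" # stands in for ~/pi_name_user

readlink "$demo/alice_user"           # prints the mount-point path
touch "$demo/alice_user/results.txt"  # the file lands in the real location
ls "$demo/mount/users/alice"          # shows results.txt

rm -rf "$demo"
```

Note that removing a symlink with rm deletes only the link itself, not the directory it points to.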

The amount of space available in the PI-specific areas depends on the allocation given to your research group. Your AFS quota is determined by DoIT. The quota for everyone's home directory is generally the same.

Some research groups have additional storage areas, or have storage organized in a different way than shown above. For more information, contact your PI or User Support. Also note that some users may belong to multiple research groups, so may have access to the storage of several PIs.
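A typical pattern with node-local scratch is to write bulky intermediate data there and keep only final results in a central area. Here is a local sketch using mktemp as a stand-in for the per-job scratch directory (the /scratch/22704-style path is created by the scheduler, not by you):

```shell
# Stand-in for the per-job scratch directory (on tara: /scratch/NNNNN)
workdir=$(mktemp -d)

echo "intermediate data" > "$workdir/step1.dat"  # bulky temporary output
sort "$workdir/step1.dat" > final-results.txt    # keep only the final result
rm -rf "$workdir"                                # scratch vanishes with the job
ls -l final-results.txt
rm final-results.txt
```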

The tutorial below will walk you through your home directory, and the specialized storage areas on tara.

A brief tour of your account

This section assumes that you already have an account, and you're a member of a research group. If you need to apply for an account, see the account request form. If you're not a member of a research group, you won't have access to the various group spaces.

Home directory

First, log in to tara from your local machine by SSH:
me@mymachine:~> ssh username@tara.rs.umbc.edu
Password: (type your password)
WARNING: UNAUTHORIZED ACCESS to this computer is in violation of Criminal
         Law Article section 8-606 and 7-302 of the Annotated Code of MD.

NOTICE:  This system is for the use of authorized users only. 
         Individuals using this computer system without authority, or in
         excess of their authority, are subject to having all of their
         activities on this system monitored and recorded by system
         personnel.


Last login: Sat Dec  5 01:39:23 2009 from hpc.rs.umbc.edu

  UMBC High Performance Computing Facility	     http://www.umbc.edu/hpcf
  --------------------------------------------------------------------------
  If you have any questions or problems regarding this system, please send
  mail to hpc-support@lists.umbc.edu.

  Remember that the Division of Information Technology will never ask for
  your password. Do NOT give out this information under any circumstances.
  --------------------------------------------------------------------------

[username@tara-fe1 ~]$
The Bash shell is the default shell for tara users; it is the shell assumed in the documentation and examples on this page. Check your shell with the command "echo $SHELL", or run "env" and look for SHELL in the output. At any given time, the directory that you are currently in is referred to as your current working directory. Since you just logged in, your home directory is your current working directory. The "~" symbol is shorthand for your home directory. The program "pwd" tells you the full path of the current working directory, so let's run pwd to see where your home directory really is:
username@tara-fe1:~$ pwd
/home/username
Now let's use ls to get more information about your home directory.
username@tara-fe1:~$ ls -ld ~
drwx------ 23 username pi_name 4096 Oct 29 22:35 /home/username
There is quite a bit of information on this line. If you're not sure of what it means, this would be a good time to find a Linux/Unix reference. One example available on the web is The Linux Cookbook. What we wanted to emphasize was the string of permissions. The string "drwx------" indicates that only you have read, write, or execute access to this directory. Therefore your home is private. The space in your home directory is limited though; you can see this by using the "quota" command:
[araim1@tara-fe1 ~]$ quota
Disk quotas for user araim1 (uid 28398): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
   mgt-ib:/home   44500  150000  150000             945   10000   15000        
[araim1@tara-fe1 ~]$ 
These numbers tell you how much space you're using and how much is available. We limit two aspects of your storage: KB of disk space and the number of files you can create. For each of those quantities, you have two limits: a soft limit and a hard limit. When you reach your soft limit, you will have a little while to reduce your usage. If you wait too long before reducing your usage, or if you pass your hard limit, then you won't be able to make more files or enlarge existing files until you delete enough to get under your soft limit. Your hard and soft limits are also referred to as your quotas.
blocks (81656): the amount of disk space I am currently using, in KB
quota (100000): my disk space soft limit, in KB
limit (150000): my disk space hard limit, in KB
grace: how far over my disk space soft limit I've gone (0 currently, since I'm below my soft limit)
files (8927): how many files I have
quota (10000): my soft limit for the number of files I can have
limit (15000): my hard limit for the number of files I can have
grace: how far over my file-count soft limit I've gone (0 currently, since I'm below my soft limit)
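Since the quota columns are positional, you can compute your usage from them with a short script. This is only a sketch, parsing the sample row shown above; the exact format of quota output may differ on your system:

```shell
# Sketch: compute home-directory usage from a quota row like the one above.
# The column layout of quota output can vary, so treat this as a demo.
row='   mgt-ib:/home   44500  150000  150000             945   10000   15000'
echo "$row" | awk '{
    used = $2; soft = $3    # blocks used and the soft limit, in KB
    printf "using %d of %d KB (%.0f%%)\n", used, soft, 100 * used / soft
    if (used > 0.8 * soft) print "WARNING: close to soft limit"
}'
# prints: using 44500 of 150000 KB (30%)
```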
In your home directory, you are only allowed to create up to 10,000 files, taking up a total of 100,000 KB of storage space. That isn't much space for high performance computing, so you should plan on using the special storage areas that have been set up for you.

Group Membership

Your account has membership in one or more Unix groups. On tara, groups are usually (but not always) organized by research group and named after the PI. The primary purpose of these groups is to facilitate sharing of files with other users, through the Unix permissions system. To see your Unix groups, try the following command:
[araim1@tara-fe1 ~]$ groups
pi_nagaraj contrib alloc_node_ssh hpcreu pi_gobbert
[araim1@tara-fe1 ~]$ 
In the example above, the user is a member of five groups - two of them correspond to research groups.
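If you ever need to test group membership in a script (say, before writing into a group area), the same information is available from the "id" command. A small sketch; pi_gobbert is just an example group name:

```shell
# List the current user's groups, one per line
id -Gn | tr ' ' '\n'

# Branch on membership in a particular group (pi_gobbert is hypothetical)
if id -Gn | tr ' ' '\n' | grep -qx "pi_gobbert"; then
    echo "member of pi_gobbert"
else
    echo "not a member of pi_gobbert"
fi
```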

Special storage areas

A typical account on tara will have access to several central storage areas. These can be classified as "backed up" and "not backed up". They can also be classified as "user" or "group" storage. See above for the complete descriptions. Let's look at these areas by running the following command.
username@tara-fe1:~$ ls -l ~/pi_name_common ~/pi_name_user ~/pi_name_saved
lrwxrwxrwx 1 username pi_name 33 Jul 29 15:48 pi_name_common -> /umbc/research/pi_name/common
lrwxrwxrwx 1 username pi_name 33 Jul 29 15:48 pi_name_user -> /umbc/research/pi_name/users/username
lrwxrwxrwx 1 username pi_name 33 Jul 29 15:48 pi_name_saved -> /group_saved/pi_name
We can see that these are symbolic links instead of normal directories. Whenever you access "/home/username/pi_name_common" for example, you are actually redirected to "/umbc/research/pi_name/common". This has been set up for you as a convenience, so that you can easily find your workspace. Note that your links may point to different places than the ones shown here, depending on your research group. If you are not a member of a research group (e.g. a student in MATH 627), you may not have any of the group areas set up. The "ls" listing has given us information about the symbolic links.

Group Saved Area

Now let's check out the actual Group Saved area.
[username@tara-fe1 ~]$ ls -ld /group_saved/pi_name/
drwxrws--- 2 gobbert pi_gobbert 4096 Apr 27 16:25 /group_saved/pi_name/
The permissions in Group Saved are different from the permissions on your home directory: the group read and write permissions, together with the setgid bit (the "s" in "rws"), allow the area to be shared with your research group. Your Group Saved area also has a quota, which isn't displayed with the plain "quota" command we demonstrated earlier. The quota for this space can be seen with the following command:
[araim1@tara-fe1 ~]$ quota -Qg
Disk quotas for group contrib (gid 700): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
mgt-ib:/usr/cluster
                2152832       0 5242880           27309       0 3000000        
Disk quotas for group alloc_node_ssh (gid 701): none
Disk quotas for group pi_nagaraj (gid 1057): none
Disk quotas for group pi_gobbert (gid 32296): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
mgt-ib:/group_saved
                1183428  10485760 10485760            3045  100000  110000        
[araim1@tara-fe1 ~]$ 
The "-g" option will display quotas associated with your group membership(s), and the "-Q" option will suppress error messages (which are normal in this case, but make the output harder to read). In this sample output, we can see that our group has used 1183428 KB out of an available 10485760 KB. (Note that this quota size is not typical of most groups).

Besides your quota, there is also a physical limit on the disk space available on the filesystems where your space is hosted. This space can be checked with the following command:

[username@tara-fe1 ~]$ df -h ~/ ~/pi_name_saved/ ~/pi_name_user ~/pi_name_common
Filesystem            Size  Used Avail Use% Mounted on
mgt-ib:/home           95G   20G   71G  22% /home
mgt-ib:/group_saved   121G  188M  114G   1% /group_saved
rstor1-ib:/export/pi_name
                      100G  493M  100G   1% /umbc/research/pi_name
[username@tara-fe1 ~]$
Of course, your output will depend on which research group you belong to. The "rstor1-ib:/export/pi_name" entry tells you where the data is physically stored: "rstor1-ib" is the name of the remote machine that stores the data, and "/export/pi_name" is the location on that machine where the data resides. The "/umbc/research/pi_name" entry tells you where that directory is mounted on the local machine. In this case, all three directories are located on the same underlying filesystem. However, this may not be the case for everyone.

More about permissions

Standard Unix permissions are used on tara to control which users have access to your files. We've already seen some examples of this. It's important to emphasize that this is the mechanism that determines the degree of sharing, and on the other hand privacy, of your work on this system. In setting up your account, we've taken a few steps to simplify things, assuming you use the storage areas for the basic purposes they were designed. This should be sufficient for many users, but you can also customize your use of the permissions system if you need additional privacy, to share with additional users, etc.

Changing a file's permissions

For existing files and directories, you can modify permissions with the "chmod" command. As a very basic example:
[araim1@tara-fe1 ~]$ touch tmpfile
[araim1@tara-fe1 ~]$ ls -la tmpfile 
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@tara-fe1 ~]$ chmod 664 tmpfile 
[araim1@tara-fe1 ~]$ ls -la tmpfile 
-rw-rw-r-- 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@tara-fe1 ~]$ 
See "man chmod" for more information, or the Wikipedia page for chmod.
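Besides octal modes, chmod accepts symbolic modes, which many find easier to read. A quick sketch with a throwaway file:

```shell
touch tmpfile
chmod u=rw,g=rw,o=r tmpfile   # symbolic equivalent of "chmod 664"
ls -l tmpfile                 # -rw-rw-r--
chmod o-r tmpfile             # additionally remove read access for others
ls -l tmpfile                 # -rw-rw----
rm tmpfile
```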

Changing a file's group

For users in multiple groups, you may find the need to change a file's ownership to a different group. This can be accomplished on a file-by-file basis with the "chgrp" command:
[araim1@tara-fe1 ~]$ touch tmpfile 
[araim1@tara-fe1 ~]$ ls -la tmpfile 
-rw-rw---- 1 araim1 pi_nagaraj 0 Jun 14 18:00 tmpfile
[araim1@tara-fe1 ~]$ chgrp pi_gobbert tmpfile 
[araim1@tara-fe1 ~]$ ls -la tmpfile 
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:00 tmpfile
[araim1@tara-fe1 ~]$ 
You may also change your "currently active" group using the "newgrp" command:
[araim1@tara-fe1 ~]$ id
uid=28398(araim1) gid=1057(pi_nagaraj) groups=1057(pi_nagaraj),32296(pi_gobbert)
[araim1@tara-fe1 ~]$ newgrp pi_gobbert
[araim1@tara-fe1 ~]$ id
uid=28398(araim1) gid=32296(pi_gobbert) groups=1057(pi_nagaraj),32296(pi_gobbert)
Now any new files created in this session will belong to the group pi_gobbert:
[araim1@tara-fe1 ~]$ touch tmpfile2
[araim1@tara-fe1 ~]$ ls -la tmpfile2 
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:05 tmpfile2
[araim1@tara-fe1 ~]$ 

Umask

By default, your account will have a line in ~/.bashrc which sets your "umask":
umask 007
The umask is traditionally set to 022 on Unix systems, so this is a customization on tara. The umask helps to determine the permissions for new files and directories you create. Usually when you create a file, you don't specify what its permissions will be. Instead some defaults are used, but they may be too liberal. For example, suppose we created a file that got the following default permissions.
[araim1@tara-fe1 ~]$ ls -la secret-research.txt
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:02 secret-research.txt
All users on the system could read this file if they had access to its directory. The umask allows us to turn off specific permissions for newly created files. Suppose we want all new files to have "rwx" turned off for anyone who isn't us (araim1) or in our group (pi_nagaraj). A umask setting of "007" accomplishes this. To illustrate what this means, notice that 007 is a three-digit number in octal (base 8). We can represent it as the 9-digit binary number 000000111. We can likewise represent "rwxrwxr-x" (from our file above) as the 9-digit binary number 111111101; dashes correspond to 0's and letters correspond to 1's. The umask is applied in the following way to set the new permissions for our file:
        111111101    <-- proposed permissions for our new file
AND NOT(000000111)   <-- the mask
------------------
=       111111000
=       rwxrwx---    <-- permissions for our new file
In other words, umask 007 ensures that outside users have no access to your new files. See the Wikipedia entry for umask for more explanation and examples. On tara, the storage areas' permissions are already set up to enforce specific styles of collaboration. We've selected 007 as the default umask to not prevent sharing with your group, but to prevent sharing with outside users. If you generally want to prevent your group from modifying your files (for example), even in the shared storage areas, you may want to use a more restrictive umask.
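You can watch the umask in action by creating files under different settings. A quick demonstration in a subshell, so your login umask is untouched (file names are throwaway):

```shell
# Run in a subshell so the umask change doesn't affect your login shell
(
    cd "$(mktemp -d)"        # work in a throwaway directory
    umask 007
    touch open-to-group      # created as rw-rw---- (666 masked by 007)
    umask 077
    touch private-file       # created as rw------- (666 masked by 077)
    ls -l open-to-group private-file
)
```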

If you have any need to change your umask, you can do so permanently by editing ~/.bashrc, or temporarily for the current SSH session by using the umask command directly.

[araim1@tara-fe1 ~]$ umask
0007
[araim1@tara-fe1 ~]$ umask 022
[araim1@tara-fe1 ~]$ umask
0022
[araim1@tara-fe1 ~]$ 
Notice that typing "umask" with no arguments reports your current umask setting.

Configuring permissions of the Group Saved storage area (PI only)

If you are a PI, you can add or remove the group write permissions (the w in r-s/rws) by using the chmod command. To add group write permissions (the w) and let all members of your group create or delete files and directories in your Group Saved area:
pi_name@tara-fe1:~$ chmod g+w ~/pi_name_saved/
To remove group write permissions so that only you, the PI pi_name, can create or delete files in your Group Saved directory:
pi_name@tara-fe1:~$ chmod g-w ~/pi_name_saved/
You could of course have used the full path to the Group Saved area, /group_saved/pi_name, in place of ~/pi_name_saved/ in those commands.

Group Workspace

Working in this area is similar to working in Group Saved. You may use the same commands to check the permissions, amount of free space, etc. If you are a PI, you may use the same commands as before to set the permissions for this directory.
username@tara-fe1:~$ ls -ld /umbc/research/pi_name/common
drwxrwsr-x 2 pi_name pi_name 2 Jul 29 14:56 /umbc/research/pi_name/common/

The major differences between this area and Group Saved are that you get much more space here, but full backups are maintained only for Group Saved. The intent is for you to store large datasets from computations here, while Group Saved is intended for working on a paper with your research group, or for storing your source code.

User Workspace

Now let's look at your User Workspace. This space is intended for you to store the results of your own computations, as opposed to Group Workspace, which is intended for data to be shared with the group.
username@tara-fe1:~$ ls -ld /umbc/research/pi_name/users/username
drwxr-sr-x 3 username pi_name 3 Sep 21 21:59 /umbc/research/pi_name/users/username
Since the permissions are drwxr-sr-x, only the owner of the directory (you: username) can create or delete files in that directory, but anyone in the group (your research group pi_name) can list the contents of the directory or cd to it.
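The "s" in drwxr-sr-x is the setgid bit on the directory: files created inside inherit the directory's group rather than the creator's primary group, which is what keeps group areas consistent. A local sketch with made-up names (not the actual tara setup commands):

```shell
# Demo of a group-shared directory with the setgid bit (names are made up)
mkdir shared-demo
chmod 2770 shared-demo     # rwxrws---: group read/write plus the setgid bit
ls -ld shared-demo         # the group "x" shows as "s" because of setgid
touch shared-demo/note.txt # new files inherit shared-demo's group
ls -l shared-demo/note.txt
rm -r shared-demo
```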

AFS Storage

Your AFS partition is the directory where your personal files are stored when you use the DoIT computer labs or the gl.umbc.edu systems. You can access this partition from tara. In order to access AFS, you need an AFS token. One is given to you when you log in, but it expires a few hours after you log in. You can see whether you currently have an AFS token by running the tokens command:
straha1@tara-fe1:~> tokens

Tokens held by the Cache Manager:

Tokens for afs@umbc.edu [Expires Oct 25 00:16]
   --End of list--
The "Tokens for afs@umbc.edu" line tells me that I currently have tokens that let me access UMBC's AFS storage. The expiration date ("Expires Oct 25 00:16") tells me when my tokens will expire. When your tokens expire, you will get this message:
straha1@tara-fe1:~> tokens

Tokens held by the Cache Manager:

   --End of list--
Notice the lack of the "Tokens for afs@umbc.edu" line. Once your tokens expire, you have to get new ones in order to access AFS again. To do that, use the aklog command:
straha1@tara-fe1:~> aklog
straha1@tara-fe1:~> tokens

Tokens held by the Cache Manager:

Tokens for afs@umbc.edu [Expires Oct 25 00:28]
   --End of list--

How to create simple files and directories

Now let's try creating some files and directories. First, let's make a directory named "testdir" in your home directory.
username@tara-fe1:~$ mkdir testdir
username@tara-fe1:~$ ls -ld testdir
drwxr-x--- 2 username pi_name 4096 Oct 30 00:12 testdir
username@tara-fe1:~$ cd testdir
username@tara-fe1:~/testdir$ _
The mkdir command created the directory testdir. Since your current working directory was ~ when you ran that command, testdir is inside your home directory. Thus it is said to be a subdirectory of ~. The cd command changed your working directory to ~/testdir and that is reflected by the new prompt: username@tara-fe1:~/testdir$. Now let's create a file in testdir:
username@tara-fe1:~/testdir$ echo HELLO WORLD > testfile
username@tara-fe1:~/testdir$ ls -l testfile
-rw-r----- 1 username pi_groupname 12 Oct 30 00:16 testfile
username@tara-fe1:~/testdir$ cat testfile
HELLO WORLD
username@tara-fe1:~/testdir$ cat ~/testdir/testfile
HELLO WORLD
username@tara-fe1:~/testdir$ _
The echo command simply prints out its arguments ("HELLO WORLD"). The ">" tells your shell to send the output of echo into the file testfile. Since your current working directory is ~/testdir, testfile was created in testdir and its full path is therefore ~/testdir/testfile. The program cat (short for "concatenate") prints out the contents of a file; its argument (testfile or ~/testdir/testfile) is the file to print. As you can see, testfile does indeed contain "HELLO WORLD". Now let's delete testdir and testfile. To use the "rmdir" command to remove our directory, we must first ensure that it is empty:
username@tara-fe1:~/testdir$ rm -i testfile
rm: remove regular file `testfile'? y
Now we delete the testdir directory with rmdir:
username@tara-fe1:~/testdir$ cd ~
username@tara-fe1:~$ rmdir testdir
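For deeper directory trees, "mkdir -p" creates a whole nested path in one step, and "rm -r" removes a directory together with its contents. A short sketch with throwaway names:

```shell
mkdir -p project/results/run1          # create the whole nested path at once
echo "42" > project/results/run1/out.txt
ls project/results/run1                # shows out.txt
rm -r project                          # remove the tree and all its contents
                                       # (add -i to be prompted per file)
```

Be careful with rm -r; unlike rmdir, it will not stop on a non-empty directory.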

How to copy files to and from tara

Probably the most general way to transfer files between machines is by Secure Copy (scp). Because some remote filesystems may be mounted to tara, it may also be possible to transfer files using "local" file operations like cp, mv, etc.

Method 1: Secure Copy (scp)

The tara cluster only allows secure connections from the outside. Secure Copy is the file copying program that is part of Secure Shell (SSH). To transfer files in and out of tara, you must use scp or similar secure software (such as WinSCP or SSHFS). On Unix machines such as Linux or Mac OS X, you can execute scp from a terminal window. Let's explain the use of scp with the following example: the user "username" has a file hello.c in the subdirectory math627/hw1 of his home directory on tara. To copy the file to the current directory on another Unix/Linux system with scp, use:
scp username@tara.rs.umbc.edu:math627/hw1/hello.c . 
Notice carefully the period "." at the end of the above sample command; it signifies that you want the file copied to your current directory (without changing the name of the file). You can copy data in the other direction too (from your machine to tara). Let's say you have a file /home/bobby-sue/myfile.m on your machine and you want to copy it to a subdirectory matlab/ of your tara home directory:
scp /home/bobby-sue/myfile.m username@tara.rs.umbc.edu:matlab/
The / after matlab ensures that scp will fail if the directory matlab does not exist. If you had left out the / and matlab was not already a directory, then scp would create a file named matlab containing the contents of /home/bobby-sue/myfile.m (which is clearly not what you want). This also means you could do the following:
scp /home/bobby-sue/myfile.m username@tara.rs.umbc.edu:matlab/herfile.m
which would copy /home/bobby-sue/myfile.m to the file matlab/herfile.m in your home directory on tara. Note that the source and destination have different file names now.
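The trailing-slash behavior is not unique to scp; plain cp works the same way, so you can safely experiment with it locally (throwaway file names):

```shell
touch myfile.m
cp myfile.m matlab      # no trailing slash: creates a FILE named matlab
ls -l matlab            # a regular file, probably not what was intended
rm matlab
cp myfile.m matlab/     # trailing slash: fails, matlab/ doesn't exist yet
mkdir matlab
cp myfile.m matlab/     # now succeeds, copying into the directory
rm -r matlab myfile.m
```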

As with SSH, you can leave out the "username@" if your username is the same on both machines; that is the case on the gl login servers and the general lab Mac OS X and Linux machines. If you issue the command from within UMBC, you can also abbreviate the machine name to tara.rs. See the scp manual page for more information. You can access the scp manual page (referred to as a "man page") on a Unix machine by running the command:

man scp

Method 2: AFS

Another way to copy data is to use the UMBC-wide AFS filesystem. The AFS filesystem is where your UMBC GL data is stored. That includes your UMBC email, your home directory on the gl.umbc.edu login servers and general lab Linux and Mac OS X machines, your UMBC webpage (if you have one), and your S: and some other drives on the general lab Windows machines. Any data you put in your AFS partition will be available on tara in the directory /afs/umbc.edu/users/u/s/username/, where username should be replaced with your username, and u and s should be replaced with the first and second letters of your username, respectively. As an example, suppose you're using a Mac OS X machine in a UMBC computer lab and you've SSHed into tara in a terminal window. Then, in that window you can type:
cp ~/myfile.m /afs/umbc.edu/users/u/s/username/home/
and your file myfile.m in your tara home directory will be copied to myfile.m in your AFS home directory. Then, you can access that copy of the file on the Mac you're using, via ~/myfile.m. Note that it's only a copy of the file; ~/myfile.m on your Mac is not the same file as ~/myfile.m on tara. However, ~/myfile.m on your Mac is the same as /afs/umbc.edu/users/u/s/username/home/myfile.m on both your Mac and tara.

Make sure you've noted the section on AFS tokens from earlier in this page, if you plan on using this mount.

How to use the queuing system

See our How to compile C programs tutorial to learn how to run both serial and parallel programs on the cluster.