username@linux1.gl.umbc.edu[16]$ ssh username@tara.rs.umbc.edu
...
[username@tara-fe1 ~]$
Storage Area | Location | Description |
---|---|---|
User Home | /home/username/ | This is where the user starts after logging in to tara. Only accessible to the user by default. Default size is 100 MB; storage is located on the management node. Backed up. |
Group Saved | Symlink: /home/username/pi_name_saved Mount point: /group_saved/pi_name/ | A storage area for files to be shared with the user's research group. Ideal for working on papers or code together, for example, because it is accessible with read and write permission to all members of the research group, and it is backed up regularly. 04/14/2010: The Group Saved storage area is currently being finalized by DoIT; it is not yet available to most users. |
User Workspace | Symlink: /home/username/pi_name_user Mount point: /umbc/research/pi_name/users/username/ | A central storage area for the user's own data, accessible only to the user and with read permission for the PI, but not accessible to other group members by default. Ideal for storing the output of parallel programs, for example. Nightly snapshots of this data are kept for ten days, in case of accidental deletion. |
Group Workspace | Symlink: /home/username/pi_name_common Mount point: /umbc/research/pi_name/common/ | The same functionality and intended use as the user workspace, except that this area is accessible with read and write permission to all members of the research group. This area is like the group saved area, except that it is larger and not backed up. Nightly snapshots of this data are kept for ten days, in case of accidental deletion. |
Scratch space | /scratch/NNNNN | Each compute node on the cluster has 100 GB of local /scratch storage. This storage is convenient temporary space to use while your job is running, but note that your files here persist only for the duration of the job. The space in this area is shared among current users of the node. Use of this area is encouraged over /tmp, which is also needed by critical system processes. Note that a subdirectory NNNNN (e.g. 22704) is created for your job by the scheduler at runtime. For information on how to access scratch space from your job, see the how to run page; a sketch of typical use appears after this table. |
Tmp Space | /tmp/ | Each machine on the cluster has its own local /tmp storage, as is customary on Unix systems. This scratch area is shared with other users and is purged periodically by the operating system, so it is only suitable for temporary scratch storage. Use of /scratch is encouraged over /tmp (see above). |
AFS | /afs/umbc.edu/users/u/s/username/ | Your AFS storage is conveniently available on the cluster, but can only be accessed from the front end node. The "/u/s" in the directory name should be replaced with the first two letters of your username (for example, user "straha1" would have the directory /afs/umbc.edu/users/s/t/straha1). |
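As a sketch of how the scratch area might be used from inside a job script: the program writes its temporary output to the node-local per-job directory, and anything worth keeping is copied to central storage before the job ends. The $SLURM_JOB_ID variable and my_program are assumptions for illustration; see the how to run page for the scheduler's actual interface.

#!/bin/bash
# Hypothetical job-script fragment. The scheduler creates /scratch/NNNNN for
# this job at runtime; here we assume a SLURM-style job ID variable names it.
SCRATCHDIR=/scratch/$SLURM_JOB_ID
cd $SCRATCHDIR

# Run with temporary output on the fast node-local disk ...
./my_program > output.dat

# ... and copy results to central storage before the job finishes,
# since the scratch directory does not persist after the job.
cp output.dat ~/pi_name_user/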
me@mymachine:~> ssh username@tara.rs.umbc.edu
Password: (type your password)

WARNING: UNAUTHORIZED ACCESS to this computer is in violation of Criminal Law
Article section 8-606 and 7-302 of the Annotated Code of MD.

NOTICE: This system is for the use of authorized users only. Individuals using
this computer system without authority, or in excess of their authority, are
subject to having all of their activities on this system monitored and
recorded by system personnel.

Last login: Sat Dec 5 01:39:23 2009 from hpc.rs.umbc.edu

UMBC High Performance Computing Facility
http://www.umbc.edu/hpcf
--------------------------------------------------------------------------
If you have any questions or problems regarding this system, please send
mail to hpc-support@lists.umbc.edu.

Remember that the Division of Information Technology will never ask for
your password. Do NOT give out this information under any circumstances.
--------------------------------------------------------------------------
[username@tara-fe1 ~]$
username@tara-fe1:~$ pwd
/home/username
username@tara-fe1:~$ ls -ld ~
drwx------ 23 username pi_name 4096 Oct 29 22:35 /home/username
[araim1@tara-fe1 ~]$ quota
Disk quotas for user araim1 (uid 28398):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
   mgt-ib:/home   81656  100000  150000            8927   10000   15000
[araim1@tara-fe1 ~]$
Field | Meaning |
---|---|
blocks 81656 | I am currently using 81656 KB of disk space |
quota 100000 | My disk space soft limit is 100000 KB |
limit 150000 | My disk space hard limit is 150000 KB |
grace | The grace period remaining for me to drop back under my disk space soft limit (blank, since I am below the soft limit) |
files 8927 | How many files I have |
quota 10000 | My soft limit for the number of files I can have |
limit 15000 | My hard limit for the number of files I can have |
grace | The grace period remaining for me to drop back under my file-count soft limit (blank, since I am below the soft limit) |
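The block and file counts above are raw numbers; the Linux quota tool also accepts a -s flag that scales them to human-readable units, which can be easier to scan (assuming the version installed on tara supports it):

[araim1@tara-fe1 ~]$ quota -s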
[araim1@tara-fe1 ~]$ groups
pi_nagaraj contrib alloc_node_ssh hpcreu pi_gobbert
[araim1@tara-fe1 ~]$
username@tara-fe1:~$ ls -l ~/pi_name_common ~/pi_name_user ~/pi_name_saved
lrwxrwxrwx 1 username pi_name 33 Jul 29 15:48 pi_name_common -> /umbc/research/pi_name/common
lrwxrwxrwx 1 username pi_name 33 Jul 29 15:48 pi_name_user -> /umbc/research/pi_name/users/username
lrwxrwxrwx 1 username pi_name 33 Jul 29 15:48 pi_name_saved -> /group_saved/pi_name
username@tara-fe1:~$ ls -ld /umbc/research/pi_name/common
drwxrws--- 2 pi_name pi_name 2 Jul 29 14:56 /umbc/research/pi_name/common/
username@tara-fe1:~$ ls -ld /umbc/research/pi_name/users/username
drwxr-sr-x 3 username pi_name 3 Sep 21 21:59 /umbc/research/pi_name/users/username
username@tara-fe1:~$ ls -ld /umbc/research/pi_name/
drwxrws--- 3 pi_name pi_name 3 Sep 21 21:59 /umbc/research/pi_name/
[username@tara-fe1 ~]$ ls -ld /group_saved/pi_name/
drwxrws--- 2 pi_name pi_name 4096 Apr 27 16:25 /group_saved/pi_name/
[username@tara-fe1 ~]$ df -h ~/ ~/pi_name_saved/ ~/pi_name_user ~/pi_name_common
Filesystem                 Size  Used Avail Use% Mounted on
mgt-ib:/home                95G   20G   71G  22% /home
mgt-ib:/group_saved        121G  188M  114G   1% /group_saved
rstor1-ib:/export/pi_name  100G  493M  100G   1% /umbc/research/pi_name
rstor1-ib:/export/pi_name  100G  493M  100G   1% /umbc/research/pi_name
[username@tara-fe1 ~]$
[araim1@tara-fe1 ~]$ quota -Qg
Disk quotas for group contrib (gid 700):
         Filesystem   blocks     quota     limit   grace   files   quota     limit   grace
mgt-ib:/usr/cluster   2152832        0   5242880           27309       0   3000000
Disk quotas for group alloc_node_ssh (gid 701): none
Disk quotas for group pi_nagaraj (gid 1057): none
Disk quotas for group pi_gobbert (gid 32296):
         Filesystem   blocks     quota     limit   grace   files   quota     limit   grace
mgt-ib:/group_saved   1183428  10485760  10485760            3045  100000    110000
[araim1@tara-fe1 ~]$
[araim1@tara-fe1 ~]$ touch tmpfile
[araim1@tara-fe1 ~]$ ls -la tmpfile
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@tara-fe1 ~]$ chmod 664 tmpfile
[araim1@tara-fe1 ~]$ ls -la tmpfile
-rw-rw-r-- 1 araim1 pi_nagaraj 0 Jun 14 17:50 tmpfile
[araim1@tara-fe1 ~]$
[araim1@tara-fe1 ~]$ touch tmpfile
[araim1@tara-fe1 ~]$ ls -la tmpfile
-rw-rw---- 1 araim1 pi_nagaraj 0 Jun 14 18:00 tmpfile
[araim1@tara-fe1 ~]$ chgrp pi_gobbert tmpfile
[araim1@tara-fe1 ~]$ ls -la tmpfile
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:00 tmpfile
[araim1@tara-fe1 ~]$
[araim1@tara-fe1 ~]$ id
uid=28398(araim1) gid=1057(pi_nagaraj) groups=1057(pi_nagaraj),32296(pi_gobbert)
[araim1@tara-fe1 ~]$ newgrp pi_gobbert
[araim1@tara-fe1 ~]$ id
uid=28398(araim1) gid=32296(pi_gobbert) groups=1057(pi_nagaraj),32296(pi_gobbert)
[araim1@tara-fe1 ~]$ touch tmpfile2
[araim1@tara-fe1 ~]$ ls -la tmpfile2
-rw-rw---- 1 araim1 pi_gobbert 0 Jun 14 18:05 tmpfile2
[araim1@tara-fe1 ~]$
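If you only need a single command to run under a different group, the sg utility (from the same shadow-utils package as newgrp, where installed) avoids switching the whole shell. A sketch, with a hypothetical filename:

[araim1@tara-fe1 ~]$ sg pi_gobbert -c "touch tmpfile3"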
umask 007
[araim1@tara-fe1 ~]$ ls -la secret-research.txt
-rwxrwxr-x 1 araim1 pi_nagaraj 0 Jun 14 17:02 secret-research.txt
    111111101        <-- proposed permissions for our new file
AND NOT(000000111)   <-- the mask
------------------
  = 111111000 = rwxrwx---   <-- permissions for our new file
[araim1@tara-fe1 ~]$ umask
0007
[araim1@tara-fe1 ~]$ umask 022
[araim1@tara-fe1 ~]$ umask
0022
[araim1@tara-fe1 ~]$
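As a hypothetical illustration of the arithmetic above: with the mask set to 007, a newly created directory (proposed permissions rwxrwxrwx) comes out as rwxrwx---, with all access for other users stripped:

[araim1@tara-fe1 ~]$ umask 007
[araim1@tara-fe1 ~]$ mkdir demodir
[araim1@tara-fe1 ~]$ ls -ld demodir
drwxrwx--- 2 araim1 pi_nagaraj 4096 Jun 14 18:10 demodir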
pi_name@tara-fe1:~$ chmod g+w ~/pi_name_saved/
pi_name@tara-fe1:~$ chmod g-w ~/pi_name_saved/
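Note that chmod only affects the directory itself; to apply the same change to everything already inside it, the PI can add the -R flag to recurse (a sketch, assuming the PI owns the contained files):

pi_name@tara-fe1:~$ chmod -R g+w /group_saved/pi_name/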
straha1@tara-fe1:~> tokens
Tokens held by the Cache Manager:
Tokens for afs@umbc.edu [Expires Oct 25 00:16]
   --End of list--
straha1@tara-fe1:~> tokens
Tokens held by the Cache Manager:
   --End of list--
[araim1@tara-fe1 ~]$ kinit
Password for araim1@UMBC.EDU:
[araim1@tara-fe1 ~]$ aklog
[araim1@tara-fe1 ~]$ tokens
Tokens held by the Cache Manager:
User's (AFS ID 28398) tokens for afs@umbc.edu [Expires Apr 4 05:57]
   --End of list--
[araim1@tara-fe1 ~]$
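The AFS token and the Kerberos ticket it was derived from are separate credentials; klist shows whether the ticket obtained by kinit is still valid:

[araim1@tara-fe1 ~]$ klist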
username@tara-fe1:~$ mkdir testdir
username@tara-fe1:~$ ls -ld testdir
drwxr-x--- 2 username pi_name 4096 Oct 30 00:12 testdir
username@tara-fe1:~$ cd testdir
username@tara-fe1:~/testdir$
username@tara-fe1:~/testdir$ echo HELLO WORLD > testfile
username@tara-fe1:~/testdir$ ls -l testfile
-rw-r----- 1 username pi_name 12 Oct 30 00:16 testfile
username@tara-fe1:~/testdir$ cat testfile
HELLO WORLD
username@tara-fe1:~/testdir$ cat ~/testdir/testfile
HELLO WORLD
username@tara-fe1:~/testdir$
username@tara-fe1:~/testdir$ rm -i testfile
rm: remove regular file `testfile'? y
username@tara-fe1:~/testdir$ cd ~
username@tara-fe1:~$ rmdir testdir
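Note that rmdir only removes empty directories; if testdir still contained files, rm's -r flag would delete the directory and its contents in one step (use with care):

username@tara-fe1:~$ rm -r testdir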
scp username@tara.rs.umbc.edu:math627/hw1/hello.c .
scp /home/bobby-sue/myfile.m username@tara.rs.umbc.edu:matlab/
scp /home/bobby-sue/myfile.m username@tara.rs.umbc.edu:matlab/herfile.m
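To copy an entire directory tree rather than a single file, scp accepts a -r flag. For example, to pull the whole math627 directory from tara to the local machine:

scp -r username@tara.rs.umbc.edu:math627 /home/bobby-sue/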
man scp
cp ~/myfile.m /afs/umbc.edu/users/u/s/username/home/
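Quota inside AFS is managed by AFS itself rather than by the cluster's quota command; assuming the OpenAFS fs utility is available on the front end node, your AFS quota can be checked with fs listquota:

fs listquota /afs/umbc.edu/users/u/s/username/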