UNIX Tips

collected and edited by Christoph Weber
last update May 19, 1998

This document is getting large... Use your browser's Find command to quickly locate what you are looking for.

Becoming yourself from someone else's login

su <yourID> retains the previous person's environment; the same form is a handy way to go from your own ID to root.
su - <yourID> (note the dash) reads all your dot files and lands you in your home directory, just like a real login.
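
A quick sanity check after switching (a hypothetical session; substitute your actual login for <yourID>):

su - <yourID>
echo $USER     # should now print your own ID
echo $HOME     # and your own home directory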

Software locations for Chazin group

In general, local programs are found in /usr/local/bin as is standard in Unix. This directory may or may not be a link to another location.
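
To check what the directory really is on your machine, ls -ld shows the directory entry itself rather than its contents, so a symbolic link is immediately visible:

ls -ld /usr/local/bin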

Core files

I assume hardly anyone looks at core files, but they are generated more often than we would like and can be HUGE! At the top of your .cshrc add:

limit coredumpsize 0k

Remove any other lines that specify similar settings. If there is a really hairy software problem, I will ask you to edit this line and deliver the resulting core file to me, but it hasn't happened yet.
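
To verify the setting, open a new shell and type the csh builtin limit with no arguments; it lists all current resource limits, and coredumpsize should show 0 kbytes:

limit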

You can also create a crontab for yourself to do some regular housekeeping for you: 
On SUN: 
setenv EDITOR vi 
crontab -e 
lands you in vi with your crontab file loaded, if one exists; get into append or insert mode and type: 
# remove all my core files daily 
05 23 * * * find ~ -name core -exec rm -f {} \; 
Exit vi the usual way. This searches your home area for core files at 11:05 pm every day and removes all of them.
On SGI: create a file with the above lines, then type crontab <file>. This file-based method works on SUNs also.
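
A minimal sketch of the file-based route (mycron is an arbitrary file name; the here-document just writes the two crontab lines into it):

cat > mycron << 'EOF'
# remove all my core files daily
05 23 * * * find ~ -name core -exec rm -f {} \;
EOF
crontab mycron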

Disk space

If there is not much space on your machine and you wonder why (and you know that you are not the culprit yourself!), check the /var partition for large files. (You may need to be root.)

du -k /var

(SunOS 4.1.x does not know the -k option!) It should be obvious in which directory the very large files are hiding. Some machines have a /var/crash area that holds dumps from crashes; if research computing hasn't requested them from you for inspection, these may go. Also, large log files may be truncated to conserve space. If you had any trouble recently, check with me or someone knowledgeable before you remove any log info. If these files are large, there may be a good reason for it!
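
For a sorted overview instead of eyeballing the raw output, pipe du through sort (the biggest directories end up at the bottom of the list):

du -k /var | sort -n | tail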

Cut and paste text files by columns

How many times have you wanted to edit those results files columnwise? Here's a generic UNIX way: (startcol and endcol refer to character columns, including whitespace. Tab counts as 1.) 
cut -cstartcol-endcol file {>receiving file}
paste sourcefile1 sourcefile2 {>final file}
I like to preview and fine-tune my commands by looking at the result on standard out (the shell window) and then redirecting output to a (temporary) file when things look right. Those feeling really adventurous can combine things into a single command line...
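
A worked example under assumed names (results.txt and the character positions are made up for illustration; your own files and columns will differ):

# pull out characters 1-8 and 17-24 as two separate files
cut -c1-8 results.txt > col1.tmp
cut -c17-24 results.txt > col3.tmp
# glue the two pieces back together, side by side
paste col1.tmp col3.tmp > reordered.txt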

File compression

Probably everyone is using compress to conserve disk space and/or net bandwidth. While compress has its merits, mainly that it's efficient and available on nearly all UNIXes, it has NO consistency checks. And there are even more efficient tools today: Under /tsri/gnu/sun4/bin you'll find gzip. Gzip and its companions gunzip, gzcat, gzmore, gzgrep (and so on) will work with files compressed with gzip, compress, pack and zip (the UNIX variant of PKZIP, perhaps it can handle PC files?). I have used gzip successfully for the last two years and SGI now ships it as part of their systems. That should speak for itself. Gzip-compressed files end with .gz. To get at gzip, edit your .cshrc:

set path = ( /tsri/gnu/sun4/bin $path )
setenv MANPATH $MANPATH:/tsri/gnu/man
setenv GZIP "<whatever>" (check gzip -h for available options)

On SGIs do nothing. gzip is already in your path.

Gzip can work through directory trees: gzip -r -9 <some dir> will compress every file it finds starting from <some dir> downwards. The -9 switch specifies best compression at the expense of speed. Great for archiving.

For very efficient compression of some directory and its contents:
tar cvf - dir | gzip -9 > dir.tar.gz

To uncompress: gzcat dir.tar.gz | tar xvf -
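
Before deleting the original directory, it is worth checking the archive (gzip -t tests the integrity of the compressed data, and tar tvf - lists the contents without extracting anything):

gzip -t dir.tar.gz
gzcat dir.tar.gz | tar tvf -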

Netscape

To start netscape without the annoying license/RSA security/etc. page, call it up with a filename or URL, e.g.: netscape http://www.scripps.edu/~chazin

Renaming a group of files

Suppose you have a group of files named file_jan96.1, file_jan96.2, file_jan96.3, etc., and you want to rename them to something like experiment_1_jan96.txt, experiment_2_jan96.txt, experiment_3_jan96.txt, etc.

Use a script like this:

#!/bin/csh -f
# loop over all files that match the old naming pattern
foreach x ( file_jan96.* )
    # $x:e is the csh "extension" modifier: it expands to the part
    # after the last dot, i.e. the sequence number 1, 2, 3, ...
    mv $x experiment_$x:e_jan96.txt
end
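
To run it, save the lines in a file (rename.csh is an arbitrary name), make it executable, and execute it in the directory that holds the files:

chmod +x rename.csh
./rename.csh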