Linux Hall C Analyzer installation kit
Red Hat 4.2 and 5.0 users: please see the new
RedHat Linux Hall C Analyzer installation kit.
The Hall C analyzer can be run on most reasonably current x86 Linux systems.
While not exhaustive, the requirements include:
- Linux running an ELF-compatible kernel. Version 2.0 or later
is recommended.
- ELF libraries. /lib should contain libc.so.5.x.x.
libc.so.5.3.12 or later is recommended.
- GCC. Version 2.7.2 or later is recommended.
- Lots of memory. 32 MB or more is recommended; 16 MB will probably
work.
- Lots of disk space.
Getting started
These instructions assume that the reader is already familiar with
running the Hall C analyzer on HPUX or other systems.
To get started, download the "Linux Hall C Analyzer installation kit",
grover.tgz. Pick a directory to untar it into. Let's call this
directory GROVER. (In my case, GROVER was ~saw/grover). It is
untarred with
cd GROVER
tar -zxf grover.tgz
System configuration
Before installation of the analyzer can begin, several system level
actions are required.
- Arrange to NFS-mount, read-only, a disk containing the Csoft directory.
If at TJNAF, this directory may be mounted from
wood:/home/saw/Csoft.
- Install CERNLIB. At TJNAF, cernlib may be NFS-mounted from
wood:/usr/local/cernlib. (This directory also contains
CERNLIB compiled for the Absoft fortran compiler in
/usr/local/cernlib/96a_absoft.)
- Get a new version of f2c from the installation kit. (This version
came from
ftp://ftp.cc.gatech.edu/pub/linux/devel/lang/fortran/f2c-960717.elf.tgz)
The new f2c package is installed by running the following
from the directory "/",
cd /
tar -zxf GROVER/f2c-960717.elf.tgz
- Copy f77 and url_get from the installation kit to /usr/bin
and /usr/local/bin with
cp GROVER/f77 /usr/bin
chmod 755 /usr/bin/f77
cp GROVER/url_get /usr/local/bin
chmod 755 /usr/local/bin/url_get
- It is strongly recommended that the Network Time Protocol daemon
(xntpd) be run to keep the Linux machine's clock synchronized with the
machine from which the Csoft software is mounted.
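The two NFS mounts above can be recorded in /etc/fstab so that they
survive reboots. A minimal sketch, using the TJNAF server paths
mentioned above; the local mount points are illustrative, not
prescribed by the kit:

```
# /etc/fstab fragment -- local mount points are illustrative
wood:/home/saw/Csoft     /nfs/Csoft           nfs  ro,soft,intr  0 0
wood:/usr/local/cernlib  /usr/local/cernlib   nfs  ro,soft,intr  0 0
```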
Setting up the analyzer source tree
The following instructions set up the master source tree (Csoft directory)
against which individual users may build personal analyzers.
- Edit the file GROVER/Groverup. Find the line that starts with
"Csoft_READONLY". Set the definition of the variable to the path of
the directory Csoft that has been NFS mounted above. If necessary,
edit the definition of CERN_ROOT on the next line.
- Pick an account from which to administer the analyzer software for the
Linux machine. From the home directory of that account, source the
Groverup script:
cd ~
source GROVER/Groverup
This will accomplish several things. It will add environment variables
to ~/.bash_profile and also define these variables for the current
login session. These variables are
NFSDIRECTORY The location of the NFS mounted readonly copy
of the analyzer source code.
CERN_ROOT Directory containing CERN bin and lib directories.
Csoft The local copy of the master source and library tree
Groverup also creates the Csoft directory tree with all of the appropriate
makefiles and source code.
- Do the commands
cd $Csoft/SRC
make
Setting up a "replay" directory in a user account
Each user that will do analysis on the Linux machine must set up a replay
directory similar to what is set up on HP workstations. The following
procedure may be used.
- Create a replay directory under your HPUX account using the
Oscar procedure.
- Make a tar file of the replay directory made by Oscar. For example,
if the replay directory is ~/replay, type
tar -zcf myreplay.tgz ~/replay
- Transfer this tar file to your Linux account.
- From your home directory untar the tar file with, for example,
tar -zxf myreplay.tgz
- Edit or create ~/.bash_profile. Copy from the
.bash_profile made by Groverup the definitions for
CERN_ROOT and Csoft. Also add to this file the
proper definition for ENGINE_CONFIG_FILE. Most likely this
will be
export ENGINE_CONFIG_FILE=~/replay/REPLAY.PARM
- Go to the SRC directory under replay.
- Copy the file Makefile from the GROVER directory to this source
directory.
This step will not be necessary once the Linux changes are integrated
into the master copy of this makefile.
- Type make. This should compile a personal analyzer.
- Copy some data runs to local disk, edit REPLAY.PARM appropriately
and replay.
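Putting the pieces together, the relevant portion of a user's
~/.bash_profile might look like the following sketch. The CERN_ROOT
and Csoft values here are illustrative only; copy the real definitions
from the .bash_profile that Groverup created:

```shell
# Illustrative values -- copy the real definitions from the
# .bash_profile that Groverup created on the administration account.
export CERN_ROOT=/usr/local/cernlib/96a
export Csoft=$HOME/Csoft
# Point the analyzer at the replay configuration file:
export ENGINE_CONFIG_FILE=~/replay/REPLAY.PARM
```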
Getting data files
The cache disks, which hold data runs that have been retrieved from
the silo, can be read-only mounted by any machine at TJNAF. Here is a
script that will mount the cache disks on a Linux machine.
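The mount script itself is not reproduced here; the following sketch
shows its general shape. The server name and export list are
placeholders, not the real TJNAF cache exports. The mount commands are
printed rather than executed so the sketch is safe to run
unprivileged; as root, pipe the output to sh to perform the mounts.

```shell
#!/bin/sh
# Placeholder names: SERVER and DISKS are NOT the real TJNAF cache
# exports -- substitute the actual server and export paths.
SERVER=cachehost
DISKS="cache1 cache2 cache3"
# Build one read-only NFS mount command per cache disk.
CMDS=$(for d in $DISKS; do
    echo "mount -o ro,soft $SERVER:/$d /mnt/$d"
done)
echo "$CMDS"
```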
If a data run is to be analyzed several times, it is helpful to
copy it to a local drive.
Data that has been recently acquired and that is not yet in the computer
center silo is not available via NFS. However, a special web server
running on the cdaq machines is designed to deliver only data files,
and only to machines on site.
The command url_get, included in the above installation kit, can
be used to retrieve these runs. To do this, use the following for the
file name in REPLAY.PARM:
g_data_source_filename = '|url_get http://cdaq1:2000/%d'
On its first contact with the special web server on chdr1, url_get
determines which machine the data file is on and redirects its
connection to that machine so that the data file is delivered in the
most efficient way possible.
url_get may also be used as a standalone program to copy a
run to a local disk. For example, nov96_12976.log may be copied to
the current directory with
url_get http://chdr1:2000/12976 > nov96_12976.log
Other ways of getting data files
In addition to url_get, any command that sends a data file to
standard output may be used in the Hall C analyzer. For example, rsh
might be used if url_get is not working or the data file is not
in a location that the special web server searches. For example, the
following filename specification will work for recently acquired data on
cdaq1.
g_data_source_filename = 'rsh cdaq1 -l yourusername "cat /cdaq1/usr/user1/cdaq/coda/runlist/nov96_%d.log"'
For the remote shell command to work, you must list the node name of
your Linux machine in the .rhosts file in your home directory
on cdaq1. This is generally considered a bad idea from a
security point of view.
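For reference, such a .rhosts entry has the following form; both the
host name and the user name below are placeholders:

```
mylinux.jlab.org yourusername
```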
Using compressed files
Saving compressed files on your local disk can increase the amount of
data that you can hold locally. File name specifications like
g_data_source_filename = '|gunzip < nov96_%d.log.gz'
will decompress the data file on the fly. The event reading routines
will automatically detect compressed files and decompress them, so
the specification
g_data_source_filename = 'nov96_%d.log.gz'
is sufficient.
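As a sketch (using a stand-in for real event data), a run on local
disk can be compressed in place and read back either way:

```shell
# Create a stand-in for a data run, then compress it in place.
printf 'CODA event data' > nov96_12976.log
gzip -f nov96_12976.log            # leaves nov96_12976.log.gz
# The analyzer reads the .gz directly; an explicit pipe also works:
gunzip < nov96_12976.log.gz
```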
Note to users of non-Hall C applications: If you are trying to
apply these techniques of retrieving data to CODA replay applications other
than the Hall C analyzer, you may need to get the
improved version of evio.c.
Last update 19 December 1996
saw@jlab.org