
Linux Hall C Analyzer installation kit

The Hall C analyzer can be run on most reasonably current x86 Linux systems. While not exhaustive, the requirements include NFS access to the JLab disks holding the Csoft source tree, cernlib, and the silo cache, and a Fortran compiler (which the grovermount script below will install if necessary).

Getting started

These instructions assume that the reader is already familiar with running the Hall C analyzer on HPUX or other systems.

To get started, download the "Linux Hall C Analyzer installation kit", grover_rh1.tgz. Pick a directory to untar it into. Let's call this directory GROVER. (In my case, GROVER was ~saw/grover). It is untarred with

	cd GROVER
	tar -zxf grover_rh1.tgz

System configuration

Before installation of the analyzer can begin, several system-level actions are required.
  1. Log on as root and execute the script grovermount:
    	cd whatever/GROVER
    	./grovermount
    
    This script NFS mounts the disks containing the Csoft source tree, cernlib, and the JLab Silo cache directories. It also installs the Fortran compiler if necessary. (A quick check of the resulting mounts is sketched after this list.)
  2. It is strongly recommended that the Network Time Protocol daemon (xntpd) be run to keep the Linux machine's clock synchronized with the machine from which the Csoft software is mounted. (A sample configuration follows this list.)
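
To confirm that grovermount succeeded, you can list the NFS mounts and check that the Csoft source tree, cernlib, and /cache all appear. The server and export names in this sketch are hypothetical; yours will differ:

	mount -t nfs
	# expected output resembles (hypothetical server names):
	#   jlabsrv:/group/hallc/Csoft on /Csoft type nfs (ro)
	#   jlabsrv:/cern on /cern type nfs (ro)
	#   cachesrv:/cache on /cache type nfs (ro)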
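
A minimal xntpd configuration needs only a server line in /etc/ntp.conf; the host name below is a placeholder for the machine that serves the Csoft software:

	# /etc/ntp.conf -- placeholder server name
	server csoft-server.jlab.org
	driftfile /etc/ntp.drift

On Red Hat systems the daemon can then typically be started with /etc/rc.d/init.d/xntpd start.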

Setting up the analyzer source tree

The following instructions set up the master source tree (Csoft directory) against which individual users may build personal analyzers.
  1. Edit the file GROVER/Groverup. Find the line that starts with "Csoft_READONLY" and set this variable to the path of the NFS-mounted Csoft directory from the previous section. If necessary, edit the definition of CERN_ROOT on the next line. (An example of the edited lines follows this list.)
  2. Pick an account from which to administer the analyzer software for the Linux machine. From the home directory of that account, source the Groverup script:
    	cd ~
    	source GROVER/Groverup
    
    This will accomplish several things. It adds environment variables to ~/.bash_profile and also defines them for the current login session. (An example of the resulting entries follows this list.) These variables are
    	NFSDIRECTORY	The location of the NFS-mounted read-only copy
    			of the analyzer source code.
    	CERN_ROOT	Directory containing the CERN bin and lib directories.
    	Csoft		The local copy of the master source and library tree.

    Groverup also creates the Csoft directory tree with all of the appropriate makefiles and source code.
  3. Run the commands
    	cd $Csoft/SRC
    	make
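
As an illustration, the edited lines in Groverup might read as follows; the paths are hypothetical and must match the actual mount points on your machine:

	# Hypothetical paths -- use your actual NFS mount points
	Csoft_READONLY=/Csoft
	CERN_ROOT=/cern/pro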
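
After sourcing Groverup, the administrator account's ~/.bash_profile should contain entries along these lines (the values shown are illustrative):

	# Added by Groverup -- illustrative values
	export NFSDIRECTORY=/Csoft
	export CERN_ROOT=/cern/pro
	export Csoft=$HOME/Csoft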
    

Setting up a "replay" directory in a user account

Each user who will do analysis on the Linux machine must set up a replay directory similar to the one used on HP workstations. The following procedure may be used.
  1. Create a replay directory under your HPUX account using the Oscar procedure.
  2. Make a tar file of the replay directory made by Oscar, using a relative path so that it unpacks cleanly. For example, if the replay directory is ~/replay, type
    	cd ~
    	tar -zcf myreplay.tgz replay
    
  3. Transfer this tar file to your Linux account.
  4. From your home directory untar the tar file with, for example,
    	tar -zxf myreplay.tgz
    
  5. Edit or create ~/.bash_profile. Copy from the .bash_profile made by Groverup the definitions for CERN_ROOT and Csoft. Also add to this file the proper definition for ENGINE_CONFIG_FILE, which will most likely be the following (a fuller sketch of the file appears after this list):
    	export ENGINE_CONFIG_FILE=~/replay/REPLAY.PARM
    
  6. Go to the SRC directory under replay.
  7. Using the file Makefile in the GROVER directory as a guide, modify the Makefile in SRC to be Linux-compatible. Your Makefile may already have these modifications. (A hypothetical illustration of the changes follows this list.)
  8. Type make. This should compile a personal analyzer.
  9. Copy some data runs to local disk, edit REPLAY.PARM appropriately, and replay.
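
Putting the environment pieces together, a user's ~/.bash_profile might contain lines like the following; the CERN_ROOT and Csoft values are illustrative and should be copied from the administrator account:

	# Illustrative values -- copy CERN_ROOT and Csoft from the
	# .bash_profile that Groverup wrote in the administrator account
	export CERN_ROOT=/cern/pro
	export Csoft=/home/cadmin/Csoft
	export ENGINE_CONFIG_FILE=~/replay/REPLAY.PARM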
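
The Linux changes to the SRC Makefile typically amount to pointing the compiler and flag variables at their Linux equivalents. The settings below are purely hypothetical; the Makefile in the GROVER directory is the authoritative reference:

	# Hypothetical Linux settings -- copy the real ones
	# from the Makefile in the GROVER directory
	FC = g77
	FFLAGS = -O -fno-automatic
	CC = gcc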

Getting data files

The cache disks, which hold data runs that have been retrieved from the silo, can be read-only mounted by any machine at TJNAF. The grovermount script above should have mounted these cache disks under /cache. The command to request that files be placed in the cache directory must be run from a CUE machine.

If a data run is to be analyzed several times, it is helpful to copy it to a local drive.
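
For example, to copy a cached run to a local data directory (the cache path and run number here are hypothetical):

	# hypothetical cache path and run number
	cp /cache/hallc/dec97_1234.log ~/data/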

Other ways of getting data files

Any command that sends a data file to standard output may be used in the Hall C analyzer to get a run. For instance, rsh might be used if the data file is not in the silo and thus can't be moved to the cache disks. The following filename specification will work for recently acquired data on cdaqh1:
	g_data_source_filename = 'rsh cdaqh1 -l yourusername "cat /home/cdaq/coda/runlist/dec97_%d.log"'
For the remote shell command to work, you must list the node name of your Linux machine in the .rhosts file in your home directory on cdaqh1. This is generally considered unwise from a security point of view.
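
The .rhosts entry is a single line giving the client host name and your user name; the host name below is a placeholder:

	mylinuxbox.jlab.org yourusername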

Using compressed files

Saving compressed files on your local disk can increase the amount of data that you can hold locally. File name specifications like
	g_data_source_filename = '|gunzip < nov96_%d.log.gz'
will decompress the data file on the fly. The event reading routines will, in fact, automatically detect compressed files and decompress them, so the specification
	g_data_source_filename = 'nov96_%d.log.gz'
is sufficient.
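
An existing local run can be compressed in place with, for example (hypothetical run number),

	gzip nov96_1234.log

which produces nov96_1234.log.gz and works with either specification above.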

Note to users of non-Hall C applications: If you are trying to apply these techniques for retrieving data to CODA replay applications other than the Hall C analyzer, you may need to get the improved version of evio.c.


Last update 21 January 1998
saw@jlab.org