These are some pictures of the paddle efficiencies and the fits I came up with for them. I meant to have one for every paddle, but in the process of doing this, some got overwritten; most are still here. There were two functions that I worked with, and I usually chose the one whose fit had the lower chi^2, unless that fit looked ridiculous (going through all the points instead of establishing a trend).
These are the functions:

f(x)=fa*(x+fb)**4+fc
h(x)=ha*x**4+hb*x**3+hc*x**2+hd*x+he

Some plots will also have "d(x)"; this is just a line at y=1, to make sure my fits don't go above it. Usually, h(x) gave a better fit, with a lower chi^2. On the x-axis, the variable is "region", an integer between 1 and 12: I divide each paddle into 12 regions, with 1 being the positive end (left for X planes, bottom for Y planes) and 12 being the negative end (right for X planes, top for Y planes). There are more details in my logbook entry here.
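Just to make it concrete, here's a minimal sketch of evaluating one of these parametrizations for a region; the function name, the argument list, and the clamping to [0,1] are my own illustration (the cap at 1 is what d(x) is there to check), not the actual fitting code:

c     Minimal sketch (my own naming, not the real fit code): evaluate
c     the quartic polynomial h(x) for one paddle at a given region
c     (1-12), capped at 1 as the d(x)=1 line is meant to check, and
c     floored at 0 just to be safe.
      real function paddle_eff(region, ha, hb, hc, hd, he)
      implicit none
      integer region
      real ha, hb, hc, hd, he, x
      x = real(region)
      paddle_eff = ha*x**4 + hb*x**3 + hc*x**2 + hd*x + he
      if (paddle_eff .gt. 1.0) paddle_eff = 1.0
      if (paddle_eff .lt. 0.0) paddle_eff = 0.0
      end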
Layer1
Layer2
Layer3
Layer4


The Harder Part

Next, I wrote a little Fortran function, to be called from PAW, that just acts like another ntuple variable. It makes use of the focal plane variables: for each track, it projects the track to each of the scintillator planes, figures out which paddle and which region of that paddle were hit, picks out a parametrization from my list, and calculates the trigger efficiency for that event.
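The real routine is trig_eff.f (path below); what follows is only a rough, hypothetical sketch of the idea for a single plane, with made-up geometry (plane z position, paddle width, offset, and length) standing in for whatever the actual code uses:

c     Hypothetical sketch (placeholder names and geometry, not the
c     real trig_eff.f): project the focal plane track to one
c     scintillator plane, find the paddle and the region (1-12),
c     and look up the fitted efficiency for that paddle/region.
      real function plane_eff(hsxfp, hsxpfp, hsyfp, hsypfp,
     &                        zplane, npad, padwid, pad0, padlen,
     &                        eff_table)
      implicit none
      real hsxfp, hsxpfp, hsyfp, hsypfp    ! focal plane track
      real zplane                          ! z of this plane (cm)
      integer npad                         ! number of paddles
      real padwid, pad0                    ! paddle width, first edge
      real padlen                          ! paddle length
      real eff_table(npad,12)              ! eff per paddle/region
      real xhit, yhit, frac
      integer ipad, iregion
c     project the track to the plane
      xhit = hsxfp + hsxpfp*zplane
      yhit = hsyfp + hsypfp*zplane
c     which paddle was hit (say an X plane, paddles stacked along x)
      ipad = int((xhit - pad0)/padwid) + 1
      if (ipad .lt. 1)    ipad = 1
      if (ipad .gt. npad) ipad = npad
c     which of the 12 regions along the paddle (along y here), with
c     region 1 at the positive end and region 12 at the negative end
      frac = (0.5*padlen - yhit)/padlen
      iregion = int(frac*12.0) + 1
      if (iregion .lt. 1)  iregion = 1
      if (iregion .gt. 12) iregion = 12
      plane_eff = eff_table(ipad, iregion)
      end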

This takes a while to run, since the function needs a bunch of information that gets declared every time it looks at a track. If anyone has any ideas, I'd like to hear them.

Anyhow, after many weird adjustments, my routine actually worked, and here are some pics of efficiency vs hsxfp and hsyfp, as well as hsdelta. If the labels are hard to read, the axis that goes from -60 to 60 is hsxfp, and the one that goes from -30 to 30 is hsyfp.
(Note: This is all run 50636, at p_HMS = -0.86 GeV/c.)
It's a color plot, a filled contour plot, and a lego plot.



Here are the ones with delta; they only have 500K events. It looks like the efficiency at the edges is worse than around delta ~ 0, and worse at very low delta than at very high delta (so bad high up, at high X).

This is zoomed in on eff > 0.9, and it's a profile plot.

Here are the same plots, but with a cut on electrons and a looser delta cut:
hsshtrk>0.75
hcer_npe>2
abs(hsdelta)<12


If you want to fiddle around with it, it lives here:

/group/hallc_ana/xem/fomin/pass1/replay/run_paw_here/trig_eff.f
and needs this to run:
/group/hallc_ana/xem/fomin/pass1/replay/run_paw_here/eff_vars.cmn

To see how the efficiency varies in the horizontal direction, I picked some deltas and looked at efficiency vs hsyfp for those deltas (+-2% for each setting).
Here are the results


High-X comparison

I then used the same parametrization of the paddle efficiencies to see how a high-X run looks. I chose run 50185, at p_HMS = -3.13 GeV/c and theta = 32 degrees, giving an X_bj of 1.11 (unless I suck with a calculator). This plot looks as expected. Next to it is a profile plot of efficiency vs. delta with electron cuts, and I think it looks similar enough to the plot above from a uniformly illuminated run. Here, the sharp drop-off at high delta is explained by the lack of events there, as can be seen from the color plot.



Does this parametrization change over the course of the run?

Dave suggested that my paddle-by-paddle parametrization of the efficiency might be different at different times during the running of the experiment, and that it might be a good idea to look at several uniformly illuminated runs. We have carbon runs at the same kinematics that were done more than once (helium running, solid targets, hydrogen/deuterium running). Unfortunately, there are only 2 sets of runs at the kinematics we're interested in. They are, however, spread far apart in time (one is towards the beginning of the experiment, 50639, and the other towards the end, 51570). Plotting the efficiency for each paddle in each layer, I hardly see any differences, so it's possible that one parametrization is good enough after all. The plots can be seen side by side here.


This is never going to work!!!

I am beginning to feel a little hopeless. This is the latest update.

1. Using the "golden" run (50639), I made a plot of trig_eff.f vs delta, divided it into 240 bins of 0.1% each, and got an efficiency for each bin. I saved these in a look-up table.

2. I then wrote a little kumac that looks at every run and makes a histogram of the average efficiency for -12 < delta < 12, using the look-up table (a minimal sketch of the look-up step is shown after this list).
3. All this method does is reproduce the efficiency of the golden run for each delta bin. With no PID cuts, the average efficiency comes out the same for every run. With particle ID cuts, there are slight variations, since some bins are empty for high-x kinematics. Some trends are reproduced okay, but this isn't good enough, as can be seen here.

4. To get a more meaningful answer, I should be using my original trig_eff.f function to compute the efficiency for every run, bin by bin, since it looks at all the layers and the hit position in each layer. This will take some time.

Apparently, it doesn't take *that* long. Here's a partial result.

5. Finally, looking at the 3/4 efficiency for runs at the same kinematics (from the scaler files), we see that it's not staying constant.
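For reference, the look-up step in items 1 and 2 amounts to something like this (a minimal sketch with my own names; out-of-range deltas are just clamped here):

c     Minimal sketch of the delta look-up: 240 bins of 0.1% spanning
c     -12% to +12%, filled once from the golden run; averaging this
c     over a run's events gives that run's predicted efficiency.
      real function eff_from_delta(delta, efftab)
      implicit none
      real delta, efftab(240)
      integer ibin
      ibin = int((delta + 12.0)/0.1) + 1
      if (ibin .lt. 1)   ibin = 1
      if (ibin .gt. 240) ibin = 240
      eff_from_delta = efftab(ibin)
      end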


Is this getting to the end, my friends?

Here's the latest update.


Summary

To summarize, as can be seen from the plots in the page linked above, the predicted efficiencies are always lower than those calculated by the engine. I guessed that this was because I was using a one-dimensional "predictor" model (in delta), whereas my original one was in 2 dimensions. I assumed that if I put in the full-blown 2-D model, I'd recover the engine-given efficiency for the run used to make the model. However, I didn't: the engine calculated 0.996 and I recovered 0.9903, so not even close.

I went through my code and added a lot of cuts: I made sure that the track projected to an existing paddle (not outside of it, especially in the last 2 layers), and I added a requirement that the 3/4 trigger fired. The reason for the latter is that when the engine looks at a track, it requires that the other 3 planes fired (to get an unbiased sample), thereby looking only at events that fired the 3/4 trigger. Adding that cut only brought up the efficiency by about 0.0017, not enough.

However, there is no reason for my predictor to recover the same efficiency as the engine, since they're really using different methods. The engine accumulates "fired" and "should've fired" events for each plane separately. At the end of the run, it divides them to get an efficiency for each plane, and then, using the 4 planes, it arrives at the 3/4 efficiency (3_of_4 = p1234 + p123 + p134 + p124 + p234). My predictor, on the other hand, looks at each individual event, projects it to each of the 4 planes, and looks up the likelihood that the given track fired both PMTs on the given paddle in the given plane. It then takes the probabilities for the 4 planes and calculates the probability that the given track fired the 3/4 trigger. Those are not equivalent.
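For concreteness, the per-event 3/4 combination the predictor forms from the four per-plane probabilities looks something like this (a minimal sketch; the engine's end-of-run formula above has the same structure, only with the run-averaged plane efficiencies plugged in):

c     Probability that at least 3 of the 4 planes fired, given the
c     probabilities e1..e4 for the individual planes: the "all four"
c     term plus the four "exactly three" terms (the same structure
c     as p1234 + p123 + p134 + p124 + p234).
      real function prob_3of4(e1, e2, e3, e4)
      implicit none
      real e1, e2, e3, e4
      prob_3of4 =      e1 *     e2 *     e3 *     e4
     &          + (1.-e1)*     e2 *     e3 *     e4
     &          +      e1 *(1.-e2)*     e3 *     e4
     &          +      e1 *     e2 *(1.-e3)*     e4
     &          +      e1 *     e2 *     e3 *(1.-e4)
      end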

In the end, the predictor is more "correct" even though it gives a lower number. The predictor method contains more information: not just which plane fired, but which paddle, and which region in the paddle. So each region in each paddle in each plane has some probability associated with it, based on my model. However, the number of events that go through a given region is much smaller than the number of events the engine uses to calculate the efficiency of a given plane. Almost all the events go through a given plane, whereas only a small fraction of them hits a given paddle in a given region of that plane, so the error bars associated with the predictor are bigger.

The other difference is of the "average of the squares" vs. the "square of the averages" type. Even though the predictor and the engine are examining the same events, the summing and the averaging over them are not done in the same order.

The thing that probably contributes the *most* to the difference is the correlation between the planes. The engine treats the planes independently, whereas the predictor does not. With the predictor, if the region in, let's say, the S1Y plane where the track projects has a really poor efficiency (probability of detecting the hit), then chances are the efficiency in the S2Y plane is just as bad or worse (the track is probably even farther from the center region). So, since the efficiencies of the planes are correlated, the efficiency for individual events sees that effect.
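A toy example (made-up numbers, not from any run) shows how this pushes the engine number up. Take two events in which all four planes are equally efficient: 0.99 for one event, 0.90 for the other (the correlated, everything-bad-together case). Averaging the planes first and then forming the 3/4 combination, the engine way, comes out higher than forming the 3/4 combination per event and then averaging, the predictor way; using the prob_3of4 sketch from above:

c     Toy illustration with made-up numbers: plane-averaged (engine
c     style) vs. per-event (predictor style) 3/4 efficiency.
      program order_of_averaging
      implicit none
      real prob_3of4
      real ea, eb, ebar, engine, predictor
      ea = 0.99              ! all four planes, event A
      eb = 0.90              ! all four planes, event B
      ebar = 0.5*(ea + eb)   ! engine: average the planes first
      engine    = prob_3of4(ebar, ebar, ebar, ebar)
      predictor = 0.5*( prob_3of4(ea, ea, ea, ea)
     &                + prob_3of4(eb, eb, eb, eb) )
      print *, 'plane-averaged (engine)  :', engine
      print *, 'per-event (predictor)    :', predictor
      end

With these made-up numbers it comes out to roughly 0.983 vs. 0.974, so the plane-averaged number is visibly higher even in this crude picture.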

So, finally, the predictor is fine, and probably more correct than the engine, but we may not end up needing it.