Matthew presented a clean comparison of Monte Carlo and data. He selected two-track events and studied RT, RL, and muon-ID.
First he applied a cut P < 45 GeV and a cut on RI (dE/dX), and looked at RT and RL. The agreement between data and MC is good. Next he looked at high-energy tracks, P > 45 GeV, and observed a large shift in RL in the MC; the RL distribution from the data is well centered. He verified that this shift is seen in the Bhabha, four-fermion, and WW Monte Carlo samples. He will contact Marie-Noelle Minard.
Matthew also checked the muon-ID for two-track events. The Monte Carlo shows clearly that the HCAL efficiencies are far too high (100%) compared to the data. The error on the efficiency is thought to be low, however, and the HCAL inefficiencies can be re-imposed in ALPHA.
Matthew's aim: to provide a starting point for RPV searches considering
the operators LQD and UDD. His work is simply an update of a calculation
done by Mike Green (ALEPH 096-068). Matthew did not update the limits
for sleptons, sneutrinos, or squarks, since those are already at about
45 GeV from the previous Z-width measurement.
For simplicity Matthew used only the total width in order to derive
results which are independent of the decay mode. He used the SM lower
bound Gamma_Z > 2489.5 MeV and the latest number from the LEP EWWG,
Gamma_Z = 2494.6 +/- 2.7 MeV (ALEPH 96-107), to compute the additional
width: Gamma_new < 9.6 MeV. Jean-Francois pointed out that
Matthew's SM value is too conservative for a limit on SUSY particles,
as he allowed the Higgs mass to go to 1 TeV, a value far
too high for the lightest SUSY Higgs.
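The arithmetic behind the quoted bound can be sketched as follows. The exact statistical prescription is not spelled out in the minutes; a one-sided 95% CL shift of the measurement is assumed here, which lands close to (but not exactly on) the quoted 9.6 MeV.

```python
# Sketch of the additional-width bound.  The one-sided 95% CL prescription
# is an assumption; the minutes quote 9.6 MeV, presumably from a slightly
# different convention or rounding.
gamma_meas = 2494.6    # MeV, LEP EWWG (ALEPH 96-107)
sigma_meas = 2.7       # MeV
gamma_sm_min = 2489.5  # MeV, SM lower bound

# Allow the measurement to fluctuate up by 1.645 sigma (one-sided 95% CL).
gamma_new_max = gamma_meas + 1.645 * sigma_meas - gamma_sm_min  # ~9.5 MeV
```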
Matthew presented the regions excluded in the (mu, M2) plane for
tan(beta) = sqrt(2) and tan(beta) = 1.01. A noticeable improvement in the limit
on neutralino production is obtained. The origin (mu, M2) = (0,0) is
not excluded by the Z-width for tan(beta) < 1.4.
Given these limits at 45 GeV for all sparticles, the best starting point for the new RPV analyses is the LEP 1.5 data and the data taken this year.
Fabio presented the results from an article
by R.D. Cousins and
V.L. Highland (NIM A320 (1992) 331-335).
Normally the limit on a cross section is computed
from the number of
events observed and an estimate of the signal efficiency and the
luminosity. The statistical and systematic uncertainties on these
quantities are added in quadrature and the result weakened by one
sigma. This gives a conservative bound.
If the uncertainties are not large compared
to the central values,
however, it is not difficult to compute the upper limit correctly.
It turns out that the correct limit is significantly better than the
conservative one. For example, if eff*Lum = 0.5 with an uncertainty
of 0.05, and Nobs = 3, then the naive limit (ignoring the uncertainty)
is 15.50 while the conservative limit is 17.22. The correct answer
is 15.89, which is much closer to the naive value than to the
conservative one. The important caveat is that the estimates of the
efficiency and luminosity must be unbiased.
See Fabio's transparencies for the mathematical details.
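The three numbers above are easy to reproduce numerically. A sketch (the smearing integral implements the Cousins-Highland averaging of the Poisson probability over a Gaussian sensitivity; the grid and bisection settings are ad hoc, not from Fabio's transparencies):

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson of mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def solve(f, target, lo=0.0, hi=100.0):
    """Bisect for f(x) = target, with f decreasing in x."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def smeared_cdf(n, xsec, s0, ds):
    """Poisson CDF averaged over a Gaussian sensitivity s = eff*Lum
    (Cousins-Highland); simple trapezoid-style grid over +-5 sigma."""
    total = norm = 0.0
    steps = 800
    for i in range(steps + 1):
        s = s0 - 5 * ds + 10 * ds * i / steps
        if s <= 0:
            continue
        w = math.exp(-0.5 * ((s - s0) / ds) ** 2)
        total += w * poisson_cdf(n, xsec * s)
        norm += w
    return total / norm

n_obs, sens, dsens = 3, 0.5, 0.05   # Nobs, eff*Lum, and its uncertainty
n_up = solve(lambda mu: poisson_cdf(n_obs, mu), 0.05)  # ~7.75 events at 95% CL

naive = n_up / sens                  # ignore the uncertainty         -> ~15.50
conservative = n_up / (sens - dsens) # weaken sensitivity by 1 sigma  -> ~17.22
correct = solve(lambda x: smeared_cdf(n_obs, x, sens, dsens), 0.05)  # ~15.9
```

As in the talk, the smeared ("correct") limit sits just above the naive one and well below the conservative one.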
These studies have been done in order to constrain
systematic effects
in the very small delta-M region (delta-M = 5 GeV) and uncertainties in
the gamma-gamma background, as already investigated by Laurent Duflot.
Total ECAL Wire Energies. One looks
at the wire energies after the
trigger has fired. The even- and odd-numbered planes are summed. These
sums are split, with one piece going to the trigger; the other piece is
retained in the event. Trigger thresholds are applied to the even and
odd planes separately.
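The plane-sum logic just described might be sketched as follows. The coincidence between the two sums is an assumption here (the minutes do not say whether the even and odd thresholds are combined with AND or OR), and the variable names are hypothetical:

```python
def ecal_wire_trigger(plane_energies, thresh_even, thresh_odd):
    """Sum even- and odd-numbered wire planes separately and apply a
    threshold to each sum.  Requiring BOTH sums over threshold is an
    assumption; the actual coincidence logic is not given in the minutes."""
    e_even = sum(e for i, e in enumerate(plane_energies) if i % 2 == 0)
    e_odd  = sum(e for i, e in enumerate(plane_energies) if i % 2 == 1)
    return e_even > thresh_even and e_odd > thresh_odd
```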
Anna showed a striking plot in which modules
have fired the trigger
but record energies below the threshold. This is not understood and does
not happen in MC events. These events should not be retained in the
analysis!
The wire energy spectrum in the MC matches
well the threshold specified
in the trigger cards. But small shifts are observed in the data. For
Endcap B a double-threshold is observed, caused by a change in the
thresholds made online during the run. Comparing Monte Carlo to data,
Anna showed the offline thresholds that Marcello recommends for the analysis.
These will give a well-defined trigger acceptance.
Events from the endcap A x B coincidence show
good agreement between
data and Monte Carlo.
Single Charged Electromagnetic. The
ECAL modules are mapped onto 60
trigger segments, but the ITC information is difficult to extract.
Marcello and Anna loop over tracks with at least two ITC hits. They
take that direction and point to an ECAL module. They consider the
trigger segments to which the module belongs, and take the one with
the minimum wire energy.
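The lookup just described can be sketched in a few lines. All of the helper functions and event fields below are hypothetical stand-ins for the real ALPHA-level quantities:

```python
def min_energy_segment(tracks, module_of_direction, segments_of_module,
                       wire_energy):
    """For each track with >= 2 ITC hits, extrapolate its direction to an
    ECAL module and pick, among the trigger segments containing that module,
    the one with the minimum wire energy.  The four helpers passed in are
    hypothetical stand-ins for the real geometry/energy lookups."""
    picks = []
    for t in tracks:
        if t['n_itc_hits'] < 2:
            continue  # Marcello and Anna require at least two ITC hits
        module = module_of_direction(t['theta'], t['phi'])
        segments = segments_of_module(module)
        picks.append(min(segments, key=wire_energy))
    return picks
```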
This algorithm works well, failing only in
a few percent of the cases.
Again the effective thresholds differ from the nominal ones by many
tens of MeV. The difference cannot be explained as a calibration problem
as the calibration corrections are too small for such an effect.
Again Marcello recommends imposing an offline threshold to obtain
better agreement between MC and Data. The proposed thresholds are:
Anna compared the MC to the data before and
after imposing these new
offline thresholds, using Mvis for gamma-gamma events. There is a small
improvement in the shape. The number of accepted events is reduced
at small Mvis by 10-15%.
Anna presented selections for the four-jet
and two-jet-lepton topologies
optimized for very small mass differences (near 5 GeV).
4J topology. Seven initial cuts are
very standard. Five more are introduced to double-reject the gamma-gamma-hadrons
background.
Four events remain which are not double-rejected. These could be
eliminated by tightening cuts on the Thrust and Enh/Evis, but at
a cost of 15% in relative efficiency. This was not done.
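The "double-rejection" bookkeeping used throughout these talks can be written generically: count, for each background event, how many cuts reject it, and flag events killed by fewer than two. The cuts below are placeholders, not the actual seven-plus-five 4J cuts:

```python
def rejection_count(event, cuts):
    """Number of cuts that reject the event; 'double-rejected' means >= 2."""
    return sum(1 for cut in cuts if not cut(event))

# Placeholder cuts for illustration only (not the real 4J selection).
cuts = [lambda ev: ev['thrust'] < 0.95,
        lambda ev: ev['enh_over_evis'] < 1.0 / 3.0]

events = [
    {'thrust': 0.99, 'enh_over_evis': 0.10},  # fails one cut only -> fragile
    {'thrust': 0.99, 'enh_over_evis': 0.50},  # fails both -> safely rejected
    {'thrust': 0.90, 'enh_over_evis': 0.10},  # passes both -> selected
]
# Events rejected, but not double-rejected: these are the worrying ones.
fragile = [ev for ev in events if 0 < rejection_count(ev, cuts) < 2]
```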
The background consists of two gamma*Z and
two gg-tau events, giving an
estimated background of 13+/-7 fb. The efficiency for an 80 GeV chargino
with a 75 GeV neutralino is 15.6%. If an upper limit on the momentum of
the leading lepton is applied, this becomes 10.7% and the overlap with
the 2JL analysis is only 3.5%.
2JL topology. The basic analysis was
presented already by Jane. Three
cuts were added to double-eliminate all gamma-gamma-hadrons events.
The background is estimated at 5+/-4 fb, coming from two gamma*Z and one
gg-tautau event. The efficiency for delta-M = 5 GeV is 7.3%.
Combined Analysis. These two selections
can be OR'ed to give a total
acceptance of 19.1+/-0.8% for delta-M = 5 GeV. The total background is
17.6+/-8.1 fb. With such a low background one expects to select no
events in the data. Assuming this, the upper limit on the cross section
would be 1.5pb (at 95% CL) to be compared to 7.5pb from the standard
analysis. Anna showed a plot comparing the efficiency of the new
selection to the standard analysis.
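A quick consistency check on the quoted numbers: with zero candidates, the 95% CL Poisson limit is N_sig < 3.0 events, so sigma_up = 3.0/(eff*L). Inverting the quoted 1.5 pb gives the integrated luminosity implicitly assumed (this luminosity is an inference, not a number from the talk):

```python
# Zero observed events => N_sig < 3.0 at 95% CL, so sigma_up = 3.0/(eff*L).
eff = 0.191           # combined acceptance at delta-M = 5 GeV
sigma_quoted = 1.5    # pb, the expected upper limit quoted in the talk

# Integrated luminosity implied by the quoted limit (inferred, not quoted).
lumi_implied = 3.0 / (eff * sigma_quoted)   # ~10.5 pb^-1
```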
This analysis is essentially finished. Laurent
Duflot agreed to
review the cuts critically. Pending his recommendations, this analysis
will be incorporated into CHAMINOU.
Bill has essentially finished a selection which doubles the acceptance for small delta-M stops (delta-M = 5 GeV) while adding only 10 fb of background. His talk focussed on the changes brought about by the requirement that all gamma-gamma-hadron MC events be eliminated by at least two cuts.
First there are a couple of changes wrt his presentation in Clermont. He dropped the cut on Ech/Nch and now requires E(obj)max < 0.1*Ecm. This is justified because the high values of Ech/Nch in the background were caused by a single energetic track. He added a cut Mvis > 10 GeV when there are just four tracks, which is useful against gamma-gamma-tau background.
Looking for events eliminated by one cut only, Bill investigated eighteen gamma-gamma-hadrons events. These are spread over a variety of variables. Bill found that the cut on neutral hadronic energy Enh/Evis < 1/3 is effective. He checked whether the background events contained simply one energetic neutral hadron, and found the answer is no. He looked at the angles of jets wrt the beam, and found that the remaining ggh events tend to give jets close to the beam axis, with one jet on each side. A cut requiring at least one jet to point more than 40 degrees from the beam eliminates these events with almost no loss in efficiency. One event could be eliminated by a cut on the Thrust, but this reduces the efficiency and is not well justified looking at a more inclusive set of events. Bill also tightened slightly some of his standard cuts.
Four events remain which are removed only by one cut. One is eliminated by a tighter cut on the angle of the most energetic object, and another is close to being eliminated by other cuts. Two remain which have theta_scatt between 2 and 3 degrees. One event has a nuclear interaction which gives too much transverse momentum but no bad tracks, i.e., no visible evidence of a nuclear interaction. This event cannot be double-eliminated without a major cost in efficiency.
This analysis is essentially finished. Bill will look into a few more details and release it for scrutiny by the group.
Laurent gave a progress report on his efforts to back up the cuts against gamma-gamma-hadron events in all standard analyses. In the six standard chargino analyses and the acoplanar jets, he changed the cut on the neutral hadronic fraction (Enh/Evis < 1/3) to a cut on the PTvis calculated excluding the neutral hadrons. This gives the same rejection power and lower inefficiencies (lower by about a factor of two). Thus the common preselection is: acopT < 175 deg, theta_S > 15 deg OR diffa > 5 deg, PT(nonh) > 0.03*Ecm (lowered to 2.5% in 2JL-low). Laurent provided a summary of all cuts and the relative loss of efficiency for charginos.
Laurent is checking, for events which are rejected by only two cuts, that those cuts are not highly correlated. He gave a list of events which are rejected by only one cut.
At the last minute he discovered that for 2JL-low it makes sense to make the cut PT(nonh) > 0.025*Ecm OR Enh/Evis < 1/3, which gives no loss of efficiency at all. This variation could be considered for the other selections.
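Laurent's common preselection, including the 2JL-low variant, can be written out as a predicate. The event-field names are hypothetical stand-ins for the real quantities:

```python
def common_preselection(ev, ecm, two_jl_low=False):
    """Common anti-gamma-gamma preselection as listed by Laurent.
    Field names ('acop_t', 'theta_s', ...) are hypothetical stand-ins."""
    if ev['acop_t'] >= 175.0:                         # acopT < 175 deg
        return False
    if not (ev['theta_s'] > 15.0 or ev['diffa'] > 5.0):
        return False
    if two_jl_low:
        # lowered threshold, OR'ed with the old Enh/Evis cut
        return (ev['pt_nonh'] > 0.025 * ecm
                or ev['enh_over_evis'] < 1.0 / 3.0)
    return ev['pt_nonh'] > 0.03 * ecm                 # PT(nonh) cut
```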
PHOJET Sample: He has produced 50k events and put them on tape at CERN: AY0140.1 through .20 (2.5k events per file). Alex Finch recommends we not use PHOJET for tagged events, and Laurent showed a distribution of cos(theta_electron) comparing the Monte Carlo generators. They are very different at large angles.
Laurent reported that this relatively modest sample of 50k events gives one event selected in the acoplanar jets analysis! With the standard cuts there are five more: two in acoplanar jets and three in the 4J very-large-mass-difference selections. Clearly we cannot take the background estimates from PYTH03 blindly....
Code: Laurent reported that the HCAL part of E12 is wrong in both the data and the MC. A correction for the raw energy was applied twice. Also, he noted some inconsistency in the use of GGKIN: Bill excluded energetic objects while the other analyses do not. He asked that a common formula be found.
Very Large delta-M Selections: Laurent conveyed some results from Laurent Serin, who has tried to reduce the WW background coming in at 172 GeV. With simple modifications to the standard analysis, the 4J topology now has a much smaller expected WW background (25 fb compared to 120 fb), and the loss of efficiency is slight except for the 8000 case. The 2JL selection is less easy to improve; the background can be reduced from 240 fb to 52 fb, but with a more significant loss of efficiency. Laurent will continue as his other duties permit.
Peter has made some minor modifications to the analysis he presented in Clermont-Ferrand, with a view toward improving the chargino exclusion. His preselection is straightforward, asking for one loosely-identified electron. If this electron has P < 1 GeV then he relies on dE/dX. He found the estimator to be well calibrated for these momenta. For muons he uses the same preselection and applies standard muon-ID cuts.
Peter generated signal samples which lie outside the parameter space already excluded by Christian using the standard analysis. Using these samples he constructed a Fisher discriminant similar to the one he had before. An important gain comes from using the lepton PT rather than the acoplanarity. The expected background is 70 fb (60 from eeee and 10 from eetautau).
Peter's selection does not veto events with isolated photons. He showed the efficiency for both the standard case and the special case when selectron -> electron + chi2 and chi2 -> chi1 + gamma. In both cases a major improvement over the standard analysis is demonstrated.
Peter ran his selection on the data and found no candidate events. He has extracted the excluded region in the (Msel,Mchi) plane, which now comes much closer to the limit Msel = Mchi than before: the exclusion stops at 2.3 GeV from the boundary.
Peter also has completed the selection for smuons. The efficiency and background are identical to the selectron selection. He has not yet run on the data.
The selectron and smuon selections can be combined (OR'ed) to look for charginos. (He allows e-mu final states in the selectron search.) He has calculated the total acceptance for the three-body final state and finds a factor of four improvement for Mcha-Mchi = 5 GeV and a 25% improvement for 10 GeV. He will look at the two-body final state using DFGT.
To Do: He will obtain the efficiency for neutralinos and start systematic studies of the background and detector simulation, including the trigger efficiency.
James is studying SUSY photon signals arising
when the lightest
neutralino decays to a photon and a gravitino (which is essentially
massless). So far he has considered the acoplanar photon topology
expected in the pair production of neutralinos. Now he looks at the
associated production of chi+G which produces isotropic energetic
photons. He applies a cut |cos(theta)|<0.7 to reduce the SM background.
From his measurement of single photon production
he can plot a limit
on the production cross section. The actual limit is considerably better
than expected, as one might anticipate given the angular distributions
observed in the data. He noted that the 172 GeV data seem to give a
compensating "excess", so this pseudo-discrepancy may go away.
He compared his limit to theoretical predictions and it was seen to
constrain those quite a bit.
He summarized the situation as follows: There
are two models which
give the decay chi -> gamma + G and both could explain the CDF event
naturally. In one (LNZ no-scale supergravity) both chichi and chi+G
production is possible. The latter depends on the gravitino (G) mass.
In this model ALEPH places the limit Mchi > 60 GeV independent of MG,
and for MG < 10^-5 eV, Mchi > 105 GeV, in which case this model is
no longer consistent with the CDF event.
In the other model (Gauge-Mediated SUSY Breaking),
chi is a pure Bino
and BR(chi->gamma+G) = 1. There are three scenarios:
The lifetime is governed by the SUSY breaking
scale, sqrt(F).
The experimental limits can be presented in the (Mchi, F) plane.
James measured his efficiency and found it
was easy to parametrize in
terms of Mchi/tau_chi. He compared the experimental limit on the cross
section in terms of Mchi for several tau_chi in the nsec range. Given
the expression for the lifetime in terms of Mchi and F, he finished
with an excluded region in the (Mchi,sqrt(F)) plane.
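The lifetime expression referred to above is not written out in the minutes. The standard result for a pure bino decaying to photon plus gravitino is Gamma = cos^2(theta_W) * Mchi^5 / (16 pi F^2); the constants and the cos^2(theta_W) factor below come from the general literature, not from James's talk:

```python
import math

HBARC_M_GEV = 1.9733e-16   # hbar*c in m*GeV
SIN2_THETA_W = 0.23        # weak mixing angle

def ctau_meters(m_chi_gev, sqrt_f_gev):
    """c*tau for chi -> gamma + gravitino, pure-bino case:
    Gamma = cos^2(theta_W) * m^5 / (16 pi F^2) in natural units."""
    kappa = 1.0 - SIN2_THETA_W            # cos^2(theta_W) for a pure bino
    f_squared = sqrt_f_gev ** 4           # F^2, with F in GeV^2
    width_gev = kappa * m_chi_gev ** 5 / (16.0 * math.pi * f_squared)
    return HBARC_M_GEV / width_gev

# Benchmark: Mchi = 100 GeV, sqrt(F) = 100 TeV gives c*tau of order 100 microns,
# i.e. a decay inside the detector, consistent with nsec-range lifetimes
# becoming relevant as sqrt(F) grows.
ctau_um = ctau_meters(100.0, 1.0e5) * 1e6
```

Since the width scales as Mchi^5 / F^4 (in sqrt(F)), the excluded region in the (Mchi, sqrt(F)) plane follows directly from the limits at fixed tau_chi.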
Michael outlined a scheme for a new, improved CHAMINOU program. There would be two parts: an ALPHA program which creates an ntuple, and a stand-alone Fortran program which uses this ntuple to evaluate the selection functions and report efficiencies, backgrounds, and candidates.
Points:
This plan was accepted by the group. A discussion
of the details took
place in the evening, among Laurent, Christian, Volker, Peter, and
Michael. Concrete steps to producing the new Chaminou were agreed upon,
and it should be ready by next week.
Michael Schmitt