1.- 1999 Data
-----------
Boris had a first look at the b-tag with the 99 Z peak data. Things are already in good shape, considering that the alignment is still in progress. At the level of track impact parameters the resolution is visibly worse than in last year's final reprocessing, but at the level of QIPBTAG this is hardly visible.

Boris then took the opportunity to mention his single/double-tag studies with the 4V neural net. He produced reweighting functions for the b and udsc jet efficiencies, and when he applied these to the hA analysis he found results more or less consistent with the previous studies, i.e. a bkg increase of ~12%. This is, however, the best way to include these systematics in the final result, and also in estimating the bkg in the 99 data. In fact, Boris has produced these functions for the 99 data as well, and they should be available soon.
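For illustration only (this is not Boris's code; the function and variable names are invented), a minimal sketch of how per-jet data/MC efficiency reweighting functions of this kind could be applied to simulated events:

    # Hypothetical sketch: reweight each MC event by the product of per-jet
    # data/MC efficiency ratios, so that the simulated b and udsc tag
    # efficiencies match those measured in data.
    def event_weight(jets, w_b, w_udsc):
        """jets: list of (btag_value, true_flavour) pairs;
        w_b, w_udsc: reweighting functions returning the data/MC ratio
        as a function of the jet b-tag variable."""
        weight = 1.0
        for btag, flavour in jets:
            weight *= w_b(btag) if flavour == 'b' else w_udsc(btag)
        return weight

    # Usage: scale each MC event by event_weight(...) when filling the
    # analysis distributions or counting expected bkg.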
 

192 GeV DATA REPORTS (with ~6.1 pb^-1)
==================================

4 jets (David Smith)
---------------
No events are selected with last year's cut analysis, with ~0.67 expected based on last year's efficiencies and the 192 GeV background cross sections. One candidate in the nnet analysis, with 0.74 bkg expected. This is a nice 4-jet candidate, with one well b-tagged jet which has a ~5 mm displaced vertex. The reconstructed Higgs mass is ~86 GeV, perfectly compatible with ZZ.

Hnunu (Jen Kile)
------------
No candidates observed; 0.2 bkg events expected. At preselection, 46.1 events expected, 45 observed. The NNet output distribution, within the limited stats, looks OK.

Hll/Htautau (Tom Greening)
---------------------
Again a surprise from the leptonic channels. One candidate observed in Hll, consistent with ZZ, with one jet very b-like and the other not at all. The surprise is that this event was also selected by the Htautau analysis, ringing the warning bell that the overlaps should be treated correctly in all cases.

4b's (Boris Tuchming)
----------------
No candidates observed, not many expected either! At preselection, ~77 events are expected and 84 are observed. Sum of masses and F distributions look reasonable, within the limited stats.

2.- Analyses
----------

Charged Higgs -> 4 jets - Progress report (Eliz. Locci)
---------------------------------------
Elizabeth presented the results of an attempt to remove the c-tag from the analysis in order to (a) make it more model-independent, (b) remove the main source of systematics, and (c) avoid the c-tag, which has not been retrained since the 183 GeV data. Many variables were tried, but studies showed that the performance saturates after the 4th or 5th variable. The new 5-variable LDA has identical performance to the existing LDA for a 70 GeV signal, and is only slightly worse at lower or higher masses, which is no problem since lower masses are excluded and higher masses (close to M_W) will need special treatment anyway. The variables used in the new LDA show good agreement between data and MC, and so does the LDA output itself.
-> It is yet to be decided how to treat the systematics, for which studies are ongoing. It was suggested that, if people feel comfortable, they can follow the standard recipe of the neutral Higgs analyses, i.e. not to subtract less bkg but to smear it according to its uncertainty (sketched after these points).
-> The question was raised how the channels will be combined, in view of the apparent weaknesses of the combination method used so far. CLFFT will be studied as an alternative.
-> It was clarified that the limit would be derived with the signal estimator (although when the limit is set at masses with high signal expectations, this is equivalent to C_s+b).
-> People were urged to try and have final results for the 10th of June.
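
For illustration (a sketch with placeholder numbers, not the charged-Higgs code): the recipe of smearing the bkg according to its uncertainty can be pictured as drawing the bkg expectation from a Gaussian of width equal to its systematic error in each toy experiment, before the Poisson fluctuation.

    # Minimal sketch of "smear the bkg according to its uncertainty";
    # all numbers are placeholders, not the analysis values.
    import numpy as np

    rng = np.random.default_rng(1)

    def toy_background_counts(b_nominal, b_syst, n_toys):
        """Draw toy bkg yields: the expectation is smeared by its systematic
        uncertainty (Gaussian, truncated at zero) before Poisson fluctuation."""
        b_smeared = np.clip(rng.normal(b_nominal, b_syst, size=n_toys), 0.0, None)
        return rng.poisson(b_smeared)

    # e.g. toy_background_counts(3.0, 0.4, 100000) folds a bkg uncertainty of
    # 0.4 events into the ensemble of bkg-only toy experiments, instead of
    # subtracting a reduced bkg.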

3.- General
---------

Likelihood Method for Discovery (Cal Loomis)
-----------------------------------
If a Higgs signal starts to appear, there are three important pieces of information to be determined:
(a) what is the probability of a bkg fluctuation,
(b) what is the signal mass and
(c) what is the signal cross section.
Cal tried to show how an unbinned likelihood fit can answer the above questions, using as an example the Hqq and Hnunu cut analyses and last year's energy/luminosity. For the compatibility with background, a relevant quantity is the fraction of the likelihood area below zero cross section, eta_b. This, in fact, has properties similar to 1-C_b, e.g. it takes values between 0 and 1, it is on average close to 0.5 when there is no signal, and it has very small values close to the signal mass when there is a signal. The only clear advantage over C_b is that eta_b has no dependence on any signal assumptions.

Taking the minimum eta_b over a mass range gives a statistic with which to estimate the significance of an observation, by looking at how often bkg-only experiments would give such a low eta_b. Doing this for experiments with a 95 GeV signal would give the expected significance for the signal. An important consideration in determining the significance of a signal is the mass range in which one is looking for it. This would have to be determined a priori. Cal showed that if you go for the SM Higgs, you are better off restricting to ~10 GeV below the kinematic threshold. But if a signal pops up at lower masses there would be no way to determine its significance a posteriori.

The fitted mass is determined with good resolution, of the order of a GeV, and a reasonable pull, although threshold effects are clearly visible. The cross section also has no bias, unless one requires a certain significance for the signal, in which case the low fluctuations would be rejected. The correlation between mass and cross section is small.
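As a rough illustration of the eta_b construction (a sketch, not Cal's actual code; the input is assumed to be a log-likelihood curve scanned over the signal cross section, with negative values allowed in the fit):

    # eta_b = fraction of the likelihood area lying below zero cross section.
    import numpy as np

    def eta_b(loglike, xsec_grid):
        """loglike: log-likelihood values on xsec_grid (grid spans s < 0 and s > 0)."""
        like = np.exp(loglike - np.max(loglike))          # normalise for stability
        total = np.trapz(like, xsec_grid)
        below = np.trapz(np.where(xsec_grid < 0, like, 0.0), xsec_grid)
        return below / total

    def min_eta_over_masses(loglike_curves, xsec_grid):
        """Minimum eta_b over the (a-priori fixed) set of test masses."""
        return min(eta_b(ll, xsec_grid) for ll in loglike_curves)

    def significance(min_eta_obs, min_eta_toys):
        """Fraction of bkg-only toy experiments with an eta_b at least as small."""
        return np.mean(np.asarray(min_eta_toys) <= min_eta_obs)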

Correlations between mass and NN in 4-jets (Yuanning Gao)
--------------------------------------------
Yuanning started with some checks of the approximation formula used in the current ALEPH combination method. He confirmed what Pete found before, i.e. that the expected limit obtained with toy MC experiments is significantly worse (by almost a GeV). The expected limit with CLFFT was 0.9 GeV better than the one obtained by the ALEPH method with toy MC experiments. The question was raised whether it is understood why CLFFT (i.e. the Likelihood Ratio, LR, combination) gives so much better expected limits. The answer is related to the fact that the approximated formula used in the ALEPH method to describe the CL distributions from bkg-only experiments is not a good approximation. It turns out that if the CL distributions do not have the shape of the approximated formula (e.g. they follow an exponential distribution rather than some power law), then the combination with the ALEPH method is not optimal.

The question was also raised whether the difference in observed limits between LR and the ALEPH method is understood. The answer was that when the proper weights are used in the ALEPH method (calculated with toy MC rather than the approximated formula), the expected limit changes only by 100 MeV while the observed limit is ~1 GeV higher. Therefore the difference between expected and observed limit in the two methods was ~3 GeV in the case of LR and ~4 GeV in the case of the ALEPH method, which is not so large. Also, in terms of unluckiness, the probability to get a lower limit is of the order of 5% with the LR method and ~2% with the ALEPH method, i.e. very similar. Eventually, it would be nice to have a distribution of the difference in limits between the two methods in toy MC experiments, to see how (un)likely it is to have the above "1 GeV" difference.

Comparing mass alone (1D) and mass + NN (2D) in the 4 jets, Yuanning first showed that the 2D adds ~300 MeV of sensitivity. He then went on to explain different ways to treat/test the correlations in the 2D case. In general, all alternatives would require a large amount of work and give problems in interpolating from one Higgs mass to another. As a test, real MC was used to check the expected CL at 95 GeV. The effect was of order 30 MeV, i.e. ~10% of the gain from using the second discriminant.
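To fix ideas, an expected limit "with toy MC experiments" can be pictured as the median of the limits obtained in bkg-only pseudo-experiments, rather than something read off an approximated CL formula. The one-bin CL_s-style counting sketch below is only an illustration of that procedure, not the CLFFT or ALEPH machinery, and all numbers are placeholders.

    import numpy as np
    from scipy.stats import poisson
    from scipy.optimize import brentq

    def upper_limit(n_obs, b, cl=0.95):
        """Signal s at which CL_s = CL_{s+b}/CL_b falls to 1-cl (one counting bin)."""
        cl_b = poisson.cdf(n_obs, b)
        f = lambda s: poisson.cdf(n_obs, s + b) / cl_b - (1.0 - cl)
        return brentq(f, 0.0, 1000.0)

    def expected_limit(b, n_toys=10000, cl=0.95, seed=2):
        """Median limit over bkg-only toy experiments."""
        rng = np.random.default_rng(seed)
        toys = rng.poisson(b, size=n_toys)
        return float(np.median([upper_limit(n, b, cl) for n in toys]))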

Limit setting with the GRAND Combination (Fred. Badaud)
----------------------------------------------
The cut analyses in Hqq and Hnunu were combined with the rest. Of the initial seven branches some were merged, leading to 5. In the 4 jets, things were complicated by the use of two discriminants. To simplify them, two approaches were tried: (a) combine all discriminants into an LDA; (b) use only the mass as a discriminant. Approach (a) takes account of the correlations and reduces the number of shapes needed to the same as in the Moriond result. However, it has several difficulties, i.e. the LDA does not clearly give the optimal combination, and the interpolation between different Higgs masses is not straightforward. Approach (b) also solves the correlations problem (radically!), it is much simpler, and it seemed to have roughly the same performance as (a). This seemed to be the preferred approach both for the people who did the work and for the audience. Two basic guidelines were followed: (1) the shapes were fitted rather than "smoothed", and (2) backgrounds with low statistics were used as "event counting" bkgs. These are certainly desirable guidelines to be followed in the future as well. The limit curves in the MSSM plane were shown, both with the standard combination method and using CLFFT, and the results seemed to be roughly the same as the Moriond results. Detailed numbers were not given.
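One plausible reading of guideline (2), shown only as an illustration (not necessarily the choice made in the combination): a low-statistics bkg enters the likelihood as a pure counting term, i.e. it contributes to the expected yield but carries no discriminant shape.

    # Illustrative extended unbinned log-likelihood (constant terms dropped);
    # the low-stat bkg b_count is given a flat pdf over the discriminant range.
    import numpy as np

    def log_likelihood(x, s, b_shape, b_count, f_sig, f_bkg, x_range):
        """x: array of discriminant values of the selected events;
        f_sig, f_bkg: normalised pdfs of the signal and the well-modelled bkg."""
        flat = 1.0 / (x_range[1] - x_range[0])
        total = s + b_shape + b_count
        dens = s * f_sig(x) + b_shape * f_bkg(x) + b_count * flat
        return -total + np.sum(np.log(dens))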

Discussion
========
In the discussion that followed, it was decided to include the work for the GRAND combination in the 189 GeV paper. The final result would be with the method giving the best expected SM limit. It was also decided to produce the results for the 189 GeV paper with CLFFT.