[These minutes are provided by Jason Nielsen.]

Introduction -- Y. Gao
----------------------

Yuanning presented the general HTF progress of the past few weeks, along with the deadlines of the next few weeks. New Monte Carlo was generated, a large part of it over the Christmas break. This new MC includes ZZ, Znunu, Zee, and Higgs signal at precise center-of-mass energies. Yuanning thanked in particular Julian, David, and Ning Shao. The precise run selection, including the new luminosities, is ready for use; we use the same selection as the WW group.

Because the combinations for most analyses are not finished, the HTF will request a Thursday meeting, to be held on 10 Feb., so that final results may be approved for the winter conferences. An HTF meeting will be held on 9 Feb. to release results to the group. The deadline for submitting inputs to the LEP Higgs WG is also 10 Feb.; those inputs will therefore have to be prepared in parallel with the combinations for the winter conferences. The LHWG inputs should be prepared with efficiencies broken down by Higgs decay, as was done last year.

Hll -- T. Greening
------------------

The Hll analysis is unchanged, but the Z mass cut has been reoptimized to 77.5 GeV/c^2. One candidate is lost due to the reoptimization, but three are gained by loosening the arbitrary lower limit on the Higgs mass to 50 GeV/c^2. The number of candidates observed, 26, agrees with the number expected, 28.5. Several well b-tagged candidates near 96 GeV/c^2 lead to a high c_b.

Tom mentioned that the efficiency for HZ -> tau tau l l is 60%, while the efficiency for HZ -> b b l l is 80%. This is not yet understood -- in principle, only the leptons are used in the selection. All inputs are ready for the combination; work is now under way on the inputs for the LHWG.

Hnunu cuts: G. Davies
---------------------

Gavin and R. White have reoptimized the cuts analysis by varying four kinematic cuts and the b-tag cut. The kinematic cuts were not changed from last year's values.
The b-tag cut was loosened slightly to 1.05. New Monte Carlo samples were used when available; the WW and Wenu MC was not shifted, while the qq MC was shifted by 500 MeV to account for the change in E_cm. One new candidate near the Z peak comes in because of the loosened b-tag cut.

Gavin showed the systematic effects studied so far. The effects on the observed confidences are calculated by changing the expected values and rerunning CLFFT. When the systematic errors are instead treated as number smearing inside CLFFT, their effect on the calculated confidences will be reduced.

Hnunu NN: J. Kile
-----------------

Both the Wisconsin and Orsay neural networks were retrained. The Wisconsin NN was trained at 200 GeV, and the Orsay NN at 196 GeV. The new working points were determined through an optimization procedure, resulting in an Orsay efficiency lower than last year's and a Wisconsin efficiency that is higher.

During the discussion, Marumi noted that the Wisconsin analysis was trained at two masses (100 and 105), while the Orsay analysis was trained at only one mass (105). Shan asked about the excess of candidates around 105 GeV/c^2 at the time of the LEPC; they were mostly selected by the Orsay analysis and are no longer selected with the new, stricter analysis. Hongbo asked about the rationale for two different analyses, trained at two different center-of-mass energies, but "complementarity" was cited as sufficient justification.

Jennifer showed the studies of the systematic effects, including the usual suspects of b-tagging, jet smearing/correction, cross section, ilpe, ISR, and MC statistics. Paul asked if there were any plans to study the effect of the two-photon background. There are not, since a two-photon event with two b-jets giving a high dijet mass is considered unlikely.

4-jet Mass Pairing Studies -- J. Kennedy
----------------------------------------

John presented the final conclusions of the work presented in December.
The work was approved at that time, with the results to be used for the winter conferences. John showed the relative performances (c_SE) of the three mass pairing schemes: m_Z, cosmin, and pdf. According to these curves, the m_Z treatment is the worst, while the pdf treatment is the best. The 4C fit was chosen over 4C rescaling, based on the improved mass resolution and the conviction that reoptimization will negate the current difference between fitting and rescaling. Reisaburo asked if the QCD matrix element could be used as a mass pairing variable, since it in principle contains the maximum amount of information. This will be considered as a possible improvement for next year.

4j and 4b cuts -- D. Smith
--------------------------

The 4j (HZ) cuts analysis, now using the 4C fit, was reoptimized over four kinematic cuts, m_12, and the b-tag. A strict Hll veto (with the reoptimized Hll Z mass cut) is employed to remove any overlap between the channels. The 199.6 GeV Monte Carlo (or rescaled 200 GeV Monte Carlo where appropriate) was used to obtain a new HZ working point of 40% efficiency for m_H = 107 GeV/c^2.

The 4b (hA) cuts analysis now uses only m_h as a discriminant variable. The optimization performed by Boris yields an fvar cut of 364. It was noted that this new optimization assumes the usual thrust cut, while the NN analysis has removed this cut. This unfortunate confusion will be resolved(?) by the 4-jet subgroup. The combination of the two (4j and 4b) analyses proceeds via three exclusive branches (hA only, hZ only, and overlap), with the overlap treated as hA or hZ in turn.

4j and 4b NN -- J. Wu
---------------------

Changes to the NN analysis include: the 4C fit instead of rescaling, a 19-variable NN instead of 17 variables, and the 2b/4b combination scheme instead of 3 branches. The analysis is not reoptimized -- the background is fixed to last year's background. In the reprocessed data, some candidates are lost due to changes in the QIPBTAG values of individual jets.
There is a deficit in the mass plot around the Z peak, and in the NN output at high (signal-like) values. Consequently, the observed confidence is _quite_ lucky for Higgs mass hypotheses above 93 GeV/c^2. The systematic uncertainties were recalculated for the b-tag, jet smearing/corrections, and MC statistics; the other systematic uncertainties were taken from last year's result.

Some people expressed concern about the significant Z-peak deficit in the new analysis and requested that the BEHOLD! analyses be rerun on the reprocessed data to confirm the effect. This will be done before the next HTF meeting.

4b channel and b-tagging -- B. Tuchming
---------------------------------------

Boris reviewed the methods used to "d0/z0 smear" the b-tag to bring Monte Carlo and data into better agreement. He then showed the effect of the smearing on the difference between data and Monte Carlo. The plots seemed to show that the smearing improves the agreement at high b-tag values, while making it slightly worse at lower b-tag values.

For the 4b analysis, Boris reminded everyone of the new Dtheta3 variable now being used, along with the new 4C fit for mass reconstruction. [See also the note above about the thrust cut.] Six events are expected, and three are observed, all of which fall between 160 and 175 GeV/c^2 in the mass sum.

Boris also demonstrated how 1-c_b can be used as a discovery estimator. In this case, the median c_b for s+b experiments can be used as a sensitivity estimator. Sensitivity is equivalent to asking the question: "For which masses could we possibly exclude the background-only hypothesis at the 5 sigma level?" This use of 1-c_b is already common in the LEP Higgs WG, and we should be able to put this number on the BEHOLD! discovery page.

Charged Higgs \tau \nu \tau \nu -- B. Fabbro
--------------------------------------------

Bernard showed a new, not-yet-optimized tau nu tau nu analysis loosely based on the wwlvlv97 package.
The cuts-based analysis looks for two jets with charged multiplicity between 1 and 4. Cuts on the lepton momentum reduce the ZZ and WW background, and a cut on the visible mass eliminates the two-photon background. Finally, cuts on the missing energy, missing mass, and p_t remove the \tau \tau \gamma background. For this selection, operating with an efficiency around 35%, the dominant backgrounds at a center-of-mass energy of 196 GeV are WW (7.65 evts of 11.99 total) and two-photon events with leptonic final states (1.38+1.01+0.70 evts). In total, 36.04 events are expected, while 36 are observed.

Assuming BR(H^- -> \tau \nu) = 1, the expected limit for all data from 172 to 202 GeV is 81.78 GeV/c^2, and the observed limit is 82.47 GeV/c^2. When systematic effects (dominated by the MC statistical error from the two-photon MC) are included, the limits decrease by about 2.5 GeV. Again, this loss will shrink if number smearing is used instead of subtracting 1 sigma of background. This result is still slightly lower than the DELPHI results reported at the LEPC in Nov. 1999 (83 observed, 83 expected).

Bernard sees room for some improvements here. Some are short-term projects, like using an LDA, generating more two-photon MC, and refining the systematics in general. A longer-term project is using the tau polarization information to reject the dominant WW background (a la DELPHI and Phil). In any case, no improvements will be made before the combination for 9 Feb.

Hadronic Charged Higgs -- M. C. Lemaire
---------------------------------------

The preselection of this analysis has not changed, but the LDA has been trained, and the LDA cut reoptimized. MATHKINE is used for the 189 GeV data, while ABCFIT is used for the new energies. In general, a deficit is observed at the WW peak. The same systematic uncertainties as last year are used. It was noted again that the systematic effects should be calculated using number smearing, not by subtracting 1 sigma of background.
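The number-smearing point recurs throughout these minutes. As a rough illustration only (not the actual CLFFT code, and with purely made-up event counts), the following sketch compares the two treatments of a background systematic in a simple counting experiment: shifting the expected background down by 1 sigma before generating toys, versus smearing the expected background toy by toy inside the confidence calculation.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson generator; adequate for small means."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def clb(n_obs, b, rng, b_sigma=0.0, n_toys=50000):
    """CL_b = fraction of background-only toy experiments with n <= n_obs.
    If b_sigma > 0, the expected background is drawn anew for each toy
    ("number smearing"); otherwise it is held fixed."""
    hits = 0
    for _ in range(n_toys):
        lam = b if b_sigma == 0.0 else max(1e-6, rng.gauss(b, b_sigma))
        if poisson(rng, lam) <= n_obs:
            hits += 1
    return hits / n_toys

# Illustrative numbers: 10 +- 2 expected background events, 8 observed.
rng = random.Random(42)
nominal = clb(8, 10.0, rng)               # no systematic
shifted = clb(8, 10.0 - 2.0, rng)         # "1 sigma less background"
smeared = clb(8, 10.0, rng, b_sigma=2.0)  # number smearing
```

The smeared confidence stays much closer to the nominal one than the shifted confidence does, which is the reduction of the systematic effect noted above.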
In the discussion, Roberto pointed out the new theoretical calculations, which give a WW cross section 2.3% lower than the one we are currently using in the HTF. It was agreed that all analyses should in principle use this new number.

Semileptonic Charged Higgs -- P. Seager
---------------------------------------

Phil referred the audience to ALEPH note 2000-001 as a summary of the improvements made to this selection, including the calculation and use of the tau polarization as a second discriminating variable. Phil showed some plots relating to the tau selection. There was some difference between data and Monte Carlo in the isolation angle of the tau, and there was a significant deficit in the WW peak. He tried to check the analysis by running it on the 189 GeV data and obtained good agreement, with no deficit. Perhaps there is a problem with the fit estimator coming from KINFIT? There was no obvious solution to the problem. Jean Francois pointed out that if the problem lay in the WW cross section (WW being the dominant background), then a deficit would also be observed in \tau \nu \tau \nu, and there is not.

Invisible Higgs -- J.B. de Vivie
--------------------------------

Only the hadronic analysis has been reoptimized: the hadronic NN has been retrained, and the E_12 cut has been optimized. A sliding analysis is used for 196, 200, and 202 GeV, while a fixed analysis is used for 192 GeV. The studies of the systematic uncertainties and the scan of the invisible Higgs (m_H vs. BR) plane may be ready in time for the 9 Feb. meeting.

Fermiophobic Higgs -- A. Tilquin
--------------------------------

Andre summarized the analysis and results of the fermiophobic Higgs search at all energies. The systematic errors were recalculated for this year. In the future, Andre will consider using a neural network to optimize the selection. He also plans to start work on H -> WW and H -> ZZ searches, noting that this will also be useful for Standard Model Higgs searches at masses above 110 GeV/c^2.
Conclusion
----------

After seeing all of the results, especially the deficit at the Z peak in several channels, Tom emphasized that we should try to push through a ZZ cross-section measurement, perhaps using the existing analyses. It was not clear who would take charge of this effort, since the usual people are either busy or gone.