Minutes of HTF on 12-April-2000 (ALEPH Week)

1. Yuanning Gao -
Yuanning presented the list of abstracts to be submitted to Osaka. In total there will be 6 abstracts:
1. SM and MSSM at 189
2. Charged at 189
3. SM, MSSM and Invisible at 192 to 202
4. Charged at 192 to 202
5. Fermiophobic at 189 to 202
6. Small paper with updated values for all analyses with Y2K data
Yuanning also showed the progress of the MC generation. The 204 GeV production is already done, and the 206 GeV production will be ready soon. Concerning bugs caused by MC production on HP machines, Yuanning stated that none of the MC was generated on HPs.

2. John Kennedy -
John Kennedy discussed the best way to pair jets in the 4-jet cut analysis. He compared the decay-angle pdf, Mz, the b-tag, Coz, and the matrix-element pdf (MTP). In addition, John investigated combinations of these variables to improve the pairing: he combined the PDF and ME(pdf) into a likelihood, called NID, and also into a linear discriminant, Ldis, which can take correlations into account. Efficiency vs. Mh indicates that Coz performed best at high masses, while the decay-angle PDF performed nearly as well and was not nearly as mass dependent. The NID performed nearly as well as the PDF. The effect on the background shapes was investigated next: the shapes were nearly the same for PDF, NID and Ldis, with perhaps NID and Ldis having slightly smaller background near threshold. Conclusion: the more complicated procedures showed no improvement over the PDF method used last year, so this year's online analysis will stay with the PDF method (a sketch of the two combination schemes appears after section 4 below). A question was asked about the jet directions: the PDF uses the 4C-fit jet directions, not the original jets.

3. Richard White -
The goal is to reduce the 3-jet-like qqg events in the Hnn cut selection. New cuts (coded up in a sketch after section 4):
1. max(EM fraction of 3rd jet) < 0.9
2. max(Angle between 3 jets) > 0.
3. y23 < 0.1
4. 0.7 < Mvis/Mrec < 1.15
These cuts reduce the expected background from 16.89 to 12.24 events, with the qqg contribution dropping from 2.14 to 0.65 expected events. The CL is improved by 0.6% at the optimization point. Conclusion: the analysis is significantly tighter, with some increase in performance. Questions were raised concerning the systematics, especially as these cuts had a much larger effect on the data than would be expected (23 to 14 events). Some also wondered whether the loss in efficiency (~10%) could be recovered if an optimization were performed.

4. Jennifer Kile -
Jennifer presented the Hnn neural net analysis, which merges the ideas of the two neural nets used and combined in last year's analysis. The structure will be a single net with 3 output nodes (corresponding to signal, WW, and qqg); a minimal sketch of this structure appears after this section. The neural net will not include the b-tagging, which will be used as a second discriminating variable. This will ease the use of the analysis for non-SM searches, as well as provide a means to select qqg and WW events for systematic studies. The preselection is augmented by 3 cuts:
1. Energy of isolated lepton > 5 GeV - reduces semi-leptonic WW
2. Energy of tau minijet < 10 GeV - reduces W tau nu events
3. Angle of tau minijet to nearest jet < 25 degrees - also targets W tau nu events
These cut values have not yet been optimized and may change. There are 9 (or 8) variables in the NN:
1. E30
2. Missing mass
3. Ewedge
4. Acollinearity
5. Total momentum
6. Transverse momentum
7. sin(Acoplanarity)
8. sin(theta1)*sin(theta2)
9. E12
E12 is still under study; if similar performance can be achieved without it, it will be removed from the NN to ease systematic studies.
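As a rough illustration of the two combination schemes John compared in section 2 (not his actual code: the per-pairing scores and the linear-discriminant weights below are invented), the sketch scores each of the three possible jet pairings with a product likelihood, NID-style, and with a linear discriminant, Ldis-style, and keeps the best-scoring pairing:

    import numpy as np

    def pick_pairing_nid(pdf_vals, me_vals):
        """NID-style: combine the decay-angle PDF and the matrix-element
        pdf for each of the 3 pairings into a product likelihood and
        return the index of the best pairing."""
        nid = np.asarray(pdf_vals) * np.asarray(me_vals)
        return int(np.argmax(nid))

    def pick_pairing_ldis(pdf_vals, me_vals, w=(0.6, 0.4)):
        """Ldis-style: a weighted linear sum, whose coefficients would in
        practice come from a discriminant analysis that accounts for the
        correlation between the inputs; the weights here are made up."""
        ldis = w[0] * np.asarray(pdf_vals) + w[1] * np.asarray(me_vals)
        return int(np.argmax(ldis))

    # Hypothetical per-pairing scores for one event (3 possible pairings):
    pdf_vals = [0.72, 0.18, 0.10]  # decay-angle PDF
    me_vals = [0.55, 0.30, 0.15]   # matrix-element pdf
    print(pick_pairing_nid(pdf_vals, me_vals), pick_pairing_ldis(pdf_vals, me_vals))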
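Richard's four anti-qqg cuts (section 3) translate directly into a selection function. A minimal sketch, assuming per-event quantities with hypothetical names; the angle threshold in cut 2 was truncated in the minutes, so it is left as a required argument:

    def pass_anti_qqg_cuts(ev, angle_cut):
        """Anti-qqg cuts for the Hnn selection (section 3). `ev` is any
        object carrying the attributes used below; the names are
        illustrative, not ALEPH's."""
        if ev.em_frac_jet3 >= 0.9:                # cut 1
            return False
        if ev.max_angle_3jets <= angle_cut:       # cut 2 (threshold unspecified)
            return False
        if ev.y23 >= 0.1:                         # cut 3
            return False
        if not (0.7 < ev.mvis / ev.mrec < 1.15):  # cut 4
            return False
        return True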
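A minimal sketch of the single-net, 3-output structure Jennifer described in section 4 (untrained and with random weights; the real architecture, training and inputs are ALEPH-specific): the 9 variables feed one hidden layer, and a softmax over 3 output nodes gives signal/WW/qqg scores, with the b-tag kept outside as a second discriminant.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(9, 12)), np.zeros(12)  # 9 inputs -> 12 hidden
    W2, b2 = rng.normal(size=(12, 3)), np.zeros(3)   # hidden -> 3 outputs

    def nn_scores(x):
        """Forward pass: x is the 9-vector (E30, missing mass, Ewedge,
        acollinearity, total momentum, transverse momentum,
        sin(acoplanarity), sin(theta1)*sin(theta2), E12).
        Returns (P_signal, P_WW, P_qqg) from a softmax."""
        h = np.tanh(x @ W1 + b1)
        z = h @ W2 + b2
        e = np.exp(z - z.max())
        return e / e.sum()

    print(nn_scores(rng.normal(size=9)))  # three scores summing to 1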
5. Tom Greening -
Tom stated that since no improvements to the 4-jet NN and tau analyses have been shown, last year's analyses will be used in this year's online result. Tom also summarized the improvements to the Hll channel shown previously:
1. No WW cuts for ID'd tau events - 4% lltautau improvement.
2. Cutting on the charged mass instead of the jet mass increases the tau ID efficiency from 54% to 82%.
3. The qq and WW backgrounds were reduced by 30% by not allowing FSR corrections for events with only 1 ID'd lepton.
4. Improved the efficiency above threshold by replacing the Z cut Mz > 77.5 with (Mz > 77.5).or.(Mz + Mh > 77.5 + Threshold), where Threshold = sqrt(s) - 91.2 (sketched after section 7 below).

6. Elizabeth Locci -
Elizabeth presented the results for the charged Higgs analysis. She showed that the ALEPH expected results were better than those of the other LEP experiments, except for DELPHI in the tau nu tau nu channel, where further improvements are in the works. For the online analysis there are no proposed changes to the tau nu tau nu and cs tau nu channels. Plans are to use Djamel Boumediene's WW selection (presented in the WW meeting) to reduce backgrounds in cs tau nu. Planned improvements for tau nu tau nu include taking advantage of polarization and including the LDA as a discriminant variable in the CL. The cscs channel now uses the 5C-fit minimum chi2 pairing instead of the minimum di-jet mass (see the sketch after section 7). A new LDA with 2 extra variables (making 7) is now used, the two new variables being y34 and ejmin*ang. Performance with 10 variables showed no improvement. Plans are to use the new LDA and 5C fit with the 189 data.

7. Paul Colas -
Paul showed the possible improvements for the Y2K data, in particular the limit improvements if an additional 200 pb-1 of data were taken. Paul used 202 GeV since the limits are not centre-of-mass limited. The improvement for tau nu tau nu is 2.1 GeV, to 86.7 GeV; for cs tau nu it is 1.3 GeV, to 77.8 GeV; and for cscs it is 0.8 GeV, to 78.9 GeV. Paul also showed how the WW events prevent a limit above Mw in the cs tau nu case even with 400 pb-1 of data and a 90% CL; in other words, it is not easy (or likely) to overcome the WW peak. The study also showed that improvements to the cs tau nu channel are crucial. Future improvements can be expected from the new analysis by Djamel Boumediene (shown in the WW meeting).
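Tom's modified Z cut (section 5, item 4) is compact enough to write out. A minimal sketch, with all masses in GeV and the Fortran-style .or. rendered in Python:

    def pass_z_cut(mz, mh, roots):
        """Relaxed Z-mass cut for the Hll channel: accept if Mz > 77.5,
        or if Mz + Mh > 77.5 + Threshold, with Threshold = sqrt(s) - 91.2."""
        threshold = roots - 91.2
        return (mz > 77.5) or (mz + mh > 77.5 + threshold)

    # At sqrt(s) = 200 GeV the plain Mz > 77.5 cut would reject this
    # above-threshold candidate, but the extra .or. term recovers it:
    print(pass_z_cut(mz=75.0, mh=112.0, roots=200.0))  # True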
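For the cscs pairing change in section 6, a minimal sketch of picking the jet pairing with the smallest 5C-fit chi2; chi2_5c stands in for the real kinematic fit, and the numbers are invented:

    def best_pairing(pairings, chi2_5c):
        """Choose the pairing that minimizes the 5C-fit chi2 (previously
        the minimum di-jet mass was used instead)."""
        return min(pairings, key=chi2_5c)

    # Hypothetical: label each pairing of 4 jets by the partner of jet 0.
    chi2_table = {(0, 1): 4.2, (0, 2): 1.1, (0, 3): 7.9}
    print(best_pairing(list(chi2_table), chi2_table.get))  # -> (0, 2)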
8. Jean-Michel Pascolo -
Jean-Michel showed the improvements to the 2HDM analysis. The h to gluons analysis is unchanged from the 16-March-2000 presentation. The h to charm analysis suffered from combinatorial problems and was improved by including a new c-tag neural net in the anti-WW NN. The c-tag variables are identical to those of the standard b-tag NN, except that the net was trained with charm as signal and b quarks ignored; as a consequence, b quarks will also be tagged very well by the charm tagger. The NN improves the background rejection by 10% over using only qipbtag. The correct combination is now chosen by making an ellipse around the WW peak (a sketch of one reading of this criterion appears at the end of these minutes). This greatly improves the analysis, especially for masses near Mw. Good agreement is found in data/MC comparisons. For 160 pb-1 at 200 GeV, the expected limits are 74.4 GeV for h to cc and 77.2 GeV for h to gluons. The Hll analysis was then also included, unchanged from the SM analysis; it has a typical efficiency of 82%, slightly above the SM efficiency. After combining the Hll channel, the expected limits for h to charm and h to gluons are 90.9 GeV and 95.0 GeV, respectively. Further work needs to be done on systematics; in particular, a number of questions were raised concerning the jet-multiplicity dependence in the gluon channel. Future plans include combining with the Hnn channel and running on all the 1999 data, as well as the 2000 data. Eventually a scan of the 2HDM can be performed.

9. Jinwei Wu -
Jinwei showed some studies of the 4-jet NN interpolation (sketched at the end of these minutes). For the 2b efficiencies, linear interpolation seemed to give a good sqrt(s) dependence. For the 4b channel the sqrt(s) dependence was not clear, and a simple average seemed best. For the shapes, a linear interpolation seemed best. The efficiencies were tested using an MC sample at 205 GeV; the tests indicate that any systematic uncertainty would be smaller than the MC statistical uncertainty.

10. Pete McNamara -
Pete led a discussion on the future treatment of systematic uncertainties. Pete proposed that the likelihood distribution would be a good place to introduce the uncertainty, and made a strong argument for smearing instead of subtraction (see transparencies); a toy illustration of the smearing idea appears below. The fact that smearing causes small errors may in part be due to the ignoring of correlated errors. Finally, Pete discussed a method by which we could include shape systematics (currently taken into account by shifting our CL). After summarizing a number of different options (see transparencies), Pete introduced a method used by L3 which translates a bin-by-bin shape systematic into an overall rate systematic. This method combines the shape systematic and the statistical systematic, and can be handled simply by our current treatment (see the transparencies for the equations and method).
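One plausible reading of Jean-Michel's ellipse criterion (section 8), with invented numbers: treat a jet pairing as WW-like when its two di-jet masses fall inside an ellipse centred on (Mw, Mw), and choose the combination from those outside it. The mass plane, centre and semi-axes are assumptions, not taken from the talk:

    MW = 80.4  # W mass in GeV

    def outside_ww_ellipse(m12, m34, a=5.0, b=5.0):
        """True if the di-jet mass pair (GeV) lies outside an ellipse of
        semi-axes (a, b) around the WW peak; the semi-axes are invented."""
        return ((m12 - MW) / a) ** 2 + ((m34 - MW) / b) ** 2 > 1.0

    pairings = [(81.0, 79.5), (92.3, 68.0), (85.0, 90.1)]
    print([p for p in pairings if outside_ww_ellipse(*p)])
    # -> [(92.3, 68.0), (85.0, 90.1)]; the WW-like pairing is dropped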
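A minimal sketch of the interpolation choices Jinwei compared (section 9), with made-up efficiencies at two MC energies: linear interpolation in sqrt(s) for the 2b efficiencies, a simple average for 4b:

    def interp_2b(roots, s1, e1, s2, e2):
        """Linearly interpolate a 2b efficiency in sqrt(s) between two
        MC points (s1, e1) and (s2, e2)."""
        return e1 + (e2 - e1) * (roots - s1) / (s2 - s1)

    def interp_4b(e1, e2):
        """The 4b sqrt(s) dependence was unclear, so a simple average
        of the two MC points is used."""
        return 0.5 * (e1 + e2)

    # Made-up efficiencies at 204 and 206 GeV, evaluated at 205 GeV:
    print(interp_2b(205.0, 204.0, 0.42, 206.0, 0.44))  # 0.43
    print(interp_4b(0.38, 0.40))                       # 0.39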
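A toy illustration of the smearing idea from section 10, not the L3 method itself (its equations are in the transparencies): rather than subtracting a systematic shift from the expected background, each pseudo-experiment draws the background rate from a Gaussian around its nominal value, so the systematic widens the count distribution entering the CL. The rates below are invented, and correlated errors, whose neglect Pete flagged, are ignored here too:

    import numpy as np

    rng = np.random.default_rng(1)

    def toy_counts(b_nominal, sigma_b, n_toys=100_000):
        """Toy-experiment counts with the background rate smeared by its
        systematic uncertainty before the Poisson draw."""
        b = rng.normal(b_nominal, sigma_b, n_toys).clip(min=0.0)
        return rng.poisson(b)

    smeared = toy_counts(b_nominal=10.0, sigma_b=1.5)
    fixed = rng.poisson(10.0, 100_000)
    print(smeared.std(), fixed.std())  # the smeared distribution is wider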