Very Short and Biased Minutes of

W Jamboree 24/1/2000

PS Auditorium – 9 am to 4:30 pm

If you want to see details and numbers, you have to look at the transparencies…

  1. TGC - OO1 (Stephane)
     The code for the standard Optimal Observables method is ready, including an improved treatment of systematic errors. The 1D results are consistent with what was shown at last year's conferences; 2D and 3D fits are under way. Calibration curves are fine, as are the combinations between channels and energies. The statistical component of the systematic error is under evaluation, and the expected statistical error for all channels is being computed. It has to be pointed out that it is technically not easy to quote the systematic error from each individual component on the couplings, because the systematics are computed directly on the cross section and the OOs and only then translated to the couplings via the likelihood (see the sketch below). The method is available for all channels, including taus and lvlv.
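
As an illustration only, a minimal toy of that translation step (hypothetical linear OO-coupling calibration and invented numbers; not the actual code): the systematic shift evaluated on the OO is propagated to the coupling by re-minimising the likelihood with the shifted input.

```python
# Toy sketch only (invented model and numbers): propagating a systematic
# evaluated on the mean Optimal Observable to the coupling via the
# likelihood.  The fit is redone with the OO shifted by its systematic;
# the induced change of the fitted coupling is the coupling systematic.

from scipy.optimize import minimize_scalar

def expected_oo(g):
    """Hypothetical calibration: mean OO as a linear function of coupling g."""
    return 0.8 * g

def nll(g, measured_oo, sigma_oo):
    """Gaussian negative log-likelihood for the measured mean OO."""
    return 0.5 * ((measured_oo - expected_oo(g)) / sigma_oo) ** 2

def fit_coupling(measured_oo, sigma_oo):
    return minimize_scalar(nll, args=(measured_oo, sigma_oo),
                           bounds=(-2.0, 2.0), method="bounded").x

measured, stat = 0.12, 0.05   # toy measured mean OO and statistical error
delta_sys = 0.02              # toy systematic shift evaluated on the OO

g_nom = fit_coupling(measured, stat)
g_shift = fit_coupling(measured + delta_sys, stat)
print("coupling systematic:", g_shift - g_nom)   # ~ delta_sys / slope
```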

  2. TGC - OO2 (Gonzalo)
     The second-order OO analysis is available for the semileptonic (electrons+muons) and 4q channels. The analysis is complete (183+189 GeV), including systematic errors, calibration curves and an evaluation of the expected errors, which are consistent with the fit to data. Background has been included. Typical maximum sizes of the systematics are 10% of the statistical error for the semileptonic channels, rising to 60% for 4q. In both cases the largest errors come from fragmentation (i.e. JETSET vs HERWIG).

  3. Extra TGCs (Tim)
     Tim's likelihood method is ready for the semileptonic channel (electrons+muons). The TGCs are also measured for CP-violating couplings. The pulls and calibration curves are fine (a sketch of what is checked follows below). The low-angle cut has been moved from 0.95 to 0.90 in cos(theta) for consistency with the other two techniques.
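
For reference, a minimal toy (hypothetical numbers, not Tim's code) of what "pulls and calibration curves are fine" amounts to: toy experiments are generated at known input couplings, and one checks that the average fitted value follows the input with unit slope and that the pull distribution has zero mean and unit width.

```python
# Toy sketch only (invented values): checking pulls and the calibration
# curve.  Toys are thrown at known input couplings; <fit> vs input should
# be a straight line with slope 1, and (fit - true)/error should have
# mean 0 and width 1.

import numpy as np

rng = np.random.default_rng(1)
sigma = 0.10     # assumed per-experiment statistical error (toy)
n_toys = 1000

for g_true in np.linspace(-0.5, 0.5, 5):             # injected couplings
    fits = g_true + sigma * rng.normal(size=n_toys)  # stand-in for toy fits
    pulls = (fits - g_true) / sigma
    print(f"input {g_true:+.2f}: <fit> = {fits.mean():+.3f}, "
          f"pull mean = {pulls.mean():+.2f}, pull width = {pulls.std():.2f}")
```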

  4. LEP energy and Z-gammas in 1999 (Benjamin)
     Preliminary results using the 1999 reprocessed data were presented last time and are confirmed by Benjamin. The central values are somewhat lower than the LEP ECAL group nominal values; the combined result is 3 sigma lower. This, together with the similar result in 1998, leads to a rather puzzling situation. There is no evidence of a forward-backward asymmetry in the 1999 result. The systematic error from the angular bias is being computed.

  5. More on fragmentation in semileptonics (Jason)
     The method that deals with the fragmentation systematic error by comparing data to MC in the lvqq channel was so far applied to the means of the various distributions. Jason has upgraded it to take into account the whole distribution, as in the 4q case: the variables are reweighted bin by bin to match the data (see the sketch below). The result of this reshaping method is very similar in central value and precision to the previous one. These new numbers will be used for the 189 GeV mass paper, as the method is more solid and consistent with the 4q one. The preliminary result from HERWIG indicates a significant difference between electrons and muons, mostly related to the locking of objects around the lepton cone; it should be understood whether this is enough to account for the observed difference.
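
A minimal sketch of the bin-by-bin reweighting idea (toy distributions and invented names; not Jason's code): the MC is reshaped onto the data histogram and the shift induced in a derived quantity is read off.

```python
# Toy sketch (invented distributions): bin-by-bin reweighting of an MC
# distribution onto the data, then reading off the shift induced in a
# derived quantity (here simply the mean).

import numpy as np

def binbybin_weights(data, mc, nbins=50):
    """Per-event MC weights that reshape the MC histogram onto the data one."""
    h_data, edges = np.histogram(data, bins=nbins, density=True)
    h_mc, _ = np.histogram(mc, bins=edges, density=True)
    ratio = np.divide(h_data, h_mc, out=np.ones_like(h_data), where=h_mc > 0)
    idx = np.clip(np.digitize(mc, edges) - 1, 0, len(ratio) - 1)
    return ratio[idx]

rng = np.random.default_rng(0)
data = rng.normal(0.02, 1.00, 20000)   # toy "data"
mc = rng.normal(0.00, 1.10, 50000)     # toy MC with a slightly wrong shape

w = binbybin_weights(data, mc)
print("shift from reshaping:", np.average(mc, weights=w) - mc.mean())
# In the real analysis the weights are propagated through the mass fit and
# the induced mass shift is quoted as the fragmentation systematic.
```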

  6. 4q mass from 1999 and plans for the Winter Conferences (Jeremy)
     A very preliminary result from the reprocessed data, using a mixture of old and new 1999 MC (see Jason's MC talk), was shown by Jeremy. It looks stable at the four energies. The statistical error is 92 MeV, and the plots are very nice. The plan is to use the same method employed for the current 189 GeV draft for the Winter Conferences result.

  7. lvqq plans for the Winter Conferences (Helenka)
     The methods employed for the paper will be used to produce a preliminary result; if necessary, the easier-to-run 1D method will be used. The main worry is that the analysis runs on the POT, which takes time. The reason is that on the MINI the energy-flow information is limited: in particular the pointer to the calobject index is not available, which makes the current version of the "locking around the lepton" routine unusable. The proposal is to abandon this piece of code for the moment and run on the MINI, accepting the slightly reduced mass resolution. A MINI-compatible routine should be produced in the medium term.

  8. General N-D fitting code (Oliver)
     The 3D fitting code has been extended to the N-D case, giving a very general code that is usable also for the 4q channel. It has been tested on 4q data and gives results very similar to the standard 2D method when the same variables are used. The binning can be varied and is not limited to the standard "Vegas" case. We eventually expect a reduction of the statistical error, similar to what was obtained in the semileptonic case (a generic sketch of such an N-D binned fit follows below).
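
For illustration, a minimal sketch of a binned Poisson likelihood fit that works in any dimensionality by flattening the N-D histogram (toy 2D example with an invented linear template interpolation; not Oliver's code).

```python
# Toy sketch (invented example): a binned Poisson likelihood fit in N
# dimensions.  N-D histograms are flattened to 1D bin arrays, so the
# same code serves the 2D, 3D, ... cases.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gammaln

def histogram_nd(events, edges):
    """Flattened N-D histogram: 'events' is (n_events, n_dims)."""
    h, _ = np.histogramdd(events, bins=edges)
    return h.ravel()

def nll(mw, data_counts, template):
    """Poisson NLL; 'template' gives expected counts per bin for a given mW."""
    mu = np.clip(template(mw), 1e-9, None)
    return np.sum(mu - data_counts * np.log(mu) + gammaln(data_counts + 1))

def make_template(t_low, t_high, mw_low=80.0, mw_high=81.0):
    """Toy: interpolate linearly between two MC templates."""
    def template(mw):
        f = (mw - mw_low) / (mw_high - mw_low)
        return (1 - f) * t_low + f * t_high
    return template

rng = np.random.default_rng(2)
edges = [np.linspace(60, 100, 21), np.linspace(0, 10, 11)]   # 2D binning
# MC templates have 5x the data statistics, hence the /5 normalisation:
mc_low = histogram_nd(rng.normal([80.0, 5.0], [2.0, 1.5], (50000, 2)), edges) / 5.0
mc_high = histogram_nd(rng.normal([81.0, 5.0], [2.0, 1.5], (50000, 2)), edges) / 5.0
data = histogram_nd(rng.normal([80.4, 5.0], [2.0, 1.5], (10000, 2)), edges)

tmpl = make_template(mc_low, mc_high)
res = minimize_scalar(nll, args=(data, tmpl), bounds=(80.0, 81.0), method="bounded")
print("fitted mW:", res.x)   # should come out close to 80.4
```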

  9. Monte Carlo production (Jason)
     Since last Thursday Jason is the W-group person responsible for Monte Carlo production. His task is to help Marcello with the production quality tests (mandatory before insertion in scanbook) and to collect the requests and needs of the W group for Monte Carlo production. The present situation of the old 1999 MC production and of the ongoing new one (the one after the GHEISHA bug fix and with the final detector mapping) has been reviewed in view of the winter conferences. For the signal we already have enough statistics from the new production, provided that we analyse all 1999 energies together (this is necessary for the stability of the 3D semileptonic fit). We would also like 50K more events at two further off-peak points (just to cross-check the linearity, even if in principle the analysis method is the same) and to double the statistics at 192 and 202 GeV (if, as a luxury, we want to check the energy points separately). For the background we will probably have to mix old and new production. This is not considered a problem, as the effect of the GHEISHA bug was mostly in azimuth. Eventually everything will be reproduced, and more statistics added to the signal too. One last point: since we used the pre-GHEISHA-bug-fix production for the 189 GeV draft, we have to compare the two productions to check that the effect on the W mass spectra is negligible.

  10. Common software tool (Anne)
     Several people are contributing to the common n-tuple code designed by Anne. The tool is now getting into reasonable shape and the first test samples can be produced. The idea is to have a single n-tuple for all channels (4q, lvqq, lvlv) and analyses (cross section, mass, TGC). The analysis will take place in two steps: the first is the general n-tuple production (data+MC on the MINI will take 24 h); the second applies the various additions/subtractions and options, which can differ from case to case (see the sketch below). We expect to gain in efficiency, speed and flexibility.
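
A minimal sketch of the two-step scheme (all names invented, not Anne's code): one expensive pass fills the common n-tuple, and each analysis then runs a cheap, customisable second step on it.

```python
# Hypothetical sketch of the two-step n-tuple scheme (names invented).
# Step 1 is the expensive single pass over data+MC; step 2 is fast and
# analysis-specific, so each group customises it without re-reading the MINI.

COMMON_BRANCHES = ("run", "event", "channel", "jets", "leptons", "weight")

def step1_common_ntuple(mini_events):
    """One pass over the MINI: keep the shared branches for every event."""
    return [{k: ev.get(k) for k in COMMON_BRANCHES} for ev in mini_events]

def step2_mass_lvqq(ntuple):
    """Example second step: the lvqq mass analysis adds its own variables."""
    selected = [ev for ev in ntuple if ev["channel"] == "lvqq"]
    for ev in selected:
        ev["njets"] = len(ev["jets"] or [])   # toy derived quantity
    return selected

# The x-sect, mass and TGC analyses all start from the same step-1 output.
```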

  11. A facility for systematics: KINAGAIN (Brigitte)
     A common problem in the computation of systematic errors with different Monte Carlo models is the need for sets of events that are as similar as possible in all respects, the only difference being the effect under study. Until now this was very difficult to achieve; a typical example is the Bose-Einstein effect as treated by LUBOEI, where in the standard approach the decay chain is modified, with effects for instance on the multiplicity of the events. Brigitte's new tool allows the kinematics only to be modified, which is what we want (see the conceptual sketch below). Beyond that, KINAGAIN allows re-fragmentation (JETSET, HERWIG, ARIADNE) and re-colour-reconnection (various options). The tool is applied to an existing MC production and produces a new KINGAL file ready for GALEPH! At present it runs on the KINGAL output itself; the version for the POT is under test, and in principle a MINI version can be made. We expect to use this method for future computations of systematic errors from Monte Carlo modelling.
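
A conceptual toy of the "kinematics only" idea (this is not the KINAGAIN interface; the rescale-and-rebalance recipe is invented purely for illustration): the particle content of the event is left untouched and only the momenta move, so any change downstream can be attributed to kinematics alone.

```python
# Conceptual toy only -- NOT the KINAGAIN interface.  It illustrates a
# kinematics-only modification: the particle list (content, multiplicity)
# is untouched, only the 3-momenta change.

import numpy as np

def modify_kinematics_only(momenta, scale=1.01):
    """Move the 3-momenta (toy: rescale, then restore momentum balance)."""
    p = np.asarray(momenta, dtype=float) * scale
    p -= p.mean(axis=0)               # re-balance the total 3-momentum
    return p

event = np.array([[1.0, 0.2, -0.3], [-0.7, 0.1, 0.4], [-0.3, -0.3, -0.1]])
print(modify_kinematics_only(event))  # same 3 particles, shifted momenta
```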

  12. Standard multiplicity analysis with 1999 data (Matthew)
     Matthew has started his graduate work by running Nick's program on the four 1999 energies. The distributions are similar to those already seen at 189 GeV; in particular an excess of soft tracks with respect to the Monte Carlo is seen, especially in the 4q data. Only the 196 GeV Monte Carlo has been used (for all four energies). The computation of the systematic errors is in progress. This method does not have the sensitivity to discriminate between the models currently used for the W mass studies; still, the distributions are interesting, and results for the winter conferences would be desirable, provided the soft-track excess is understood. Roger is planning to extend these studies to Lorentz-invariant variables.

  13. Bose-Einstein with 1999 data (Bolek)
     Standard ALEPH method: the tuning at the Z has been extended to the Z calibration data of all years. The distributions show a similar structure; in particular the data/MC discrepancy in the rho region is seen in all years. The reprocessed data are then used for the WW analysis. Combining the 1998 and 1999 data, the difference between the data and the Monte Carlo with Bose-Einstein correlations between different W's is 2.2 sigma, to be compared with 2.7 sigma for the 1998 data alone. DELPHI method: this method yields a nice sensitivity; the data prefer no BE correlations between different W's at the 4 sigma level. The rise at Q=0 for the BEI MC has to be understood (it seems it is not there for DELPHI...). A toy sketch of the basic observable is given below.
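
For reference, a toy sketch of the basic observable (invented helper, not the analysis code): the variable Q for like-sign pion pairs, whose data/MC(no-BE) ratio at small Q is where BE correlations appear; for inter-W correlations only pairs with the two pions coming from different W's enter.

```python
# Toy sketch (not the analysis code): the basic BE observable, the
# distribution of Q = sqrt(-(p1 - p2)^2) for like-sign pion pairs.

import numpy as np
from itertools import combinations

def q_values(four_momenta):
    """Q for all pairs; 'four_momenta' is an (n, 4) array of (E, px, py, pz)."""
    p = np.asarray(four_momenta, dtype=float)
    qs = []
    for p1, p2 in combinations(p, 2):
        d = p1 - p2
        q2 = d[1]**2 + d[2]**2 + d[3]**2 - d[0]**2   # -(p1 - p2)^2
        qs.append(np.sqrt(max(q2, 0.0)))
    return np.array(qs)

# Histogram q_values(...) for like-sign pairs in data and in a no-BE MC;
# the ratio of the two, plotted vs Q, is the correlation function studied.
```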