Responses by Cristina Riccardi (CR), Marcello Maggi (MM), Marcus Hohlmann (MH), Jay Hauser (JH), Roumyana Hadjiiska (RH), Archana Sharma (AS)

GEM TDR

Overall Comments:

The document is well organized, well written and well edited. Overall it creates a good impression of the project, and appears to be very thorough. I cannot say that I have done it full justice in my reading, partly due to the time of year and the timing of its release, as well as other pressures on my time, so my comments will certainly be selective, and I may not have studied it deeply enough to have a complete understanding.

I note that a large part of the document is devoted to the GEM chambers: over 40% of the main body, with much detail. As I am not an expert on gas detectors or their assembly, I can simply acknowledge that I was impressed by the depth of detail and the attention given to important issues of production, assembly and quality assurance. Where I could compare this with experience, I did not see any grounds for concern.

I note that overall there remains quite a lot of R&D to be completed in all areas of the project, including aspects of the GEM work. I cannot really comment quantitatively on this, except to say that I am sure this remains a challenging project to complete, despite the progress which is evident. I can believe that for some areas this should not be a cause for concern, provided attention is given in a timely way, e.g. alignment. In other areas, such as finalization of the chambers, and the trigger, I would be concerned to ensure that such activities are resourced as soon as possible, as I do foresee some challenges.

The case for strengthening the muon detection, triggering and reconstruction in this awkward region for CMS appears to be a good one, although the absence of section 6.4 makes it difficult to judge what might be the cost-benefit ratio. I expect that case will be made and I would be more concerned about ensuring that delivery of the objectives is achieved, i.e. that there is no weakening of the expected performance if further R&D or, more likely, implementation of the full detector system proves to be challenging for the GEM collaboration. Although a sizeable number of groups are involved, it is hard to assess the strength across the project. From the emphasis on the GEM chambers I would hazard a guess that there is more strength in construction and assembly, and GEM data analysis, than there is in some other areas of the project. I would therefore expect that some areas will be (are already) weaker in effort than desirable and I would question whether that is true in some key areas, such as the readout system and the trigger.

This also raises the question of whether the effort in the project is optimally distributed.

In its earlier phases the project was certainly focussed on detector readiness; the emphasis has now shifted strongly towards the on- and off-detector electronics and DAQ, with new groups and resources becoming available. (AS)

I understand the concern about the choice of gas and its potential, or not, for global warming. However, I could not judge whether the effort spent searching for an alternative, which will not be complete before the end of 2015 (and would that answer be definitive, and not subject to more investigations, or raise new engineering issues?), might be better used on other items, given that an environmentally acceptable baseline does exist. As a non-expert in gas detectors, I was not able to evaluate, in the time available, the practical consequences of the different choices (e.g. cluster size, timing resolution, risk of discharge, environmental consequence,…). If this information is available, perhaps it might be tabulated in such a way as to guide the reader to understand the need for this, and to show that no new issues (gas system engineering, for example, or unwanted incompatibilities with CMS systems) will arise once the gas is selected.

Do such issues (e.g. cluster size, or discharge rate) have any impact on the trigger performance?

The effort on an alternative gas is a relatively small one, currently being pursued by only one group (Frascati), so it is not a big drain on resources within the collaboration. The impact of different gas mixtures on basic performance parameters is not known yet, since we still need to perform experimental studies with the more uncommon CF4 alternatives. We note that other gaseous detectors in CMS, e.g. the CSCs, face the same issue with CF4, so it needs to be addressed by CMS eventually one way or another. (MH)

In contrast, my impression is that there are some areas which are less well resourced, and are likely to prove challenging for the project. I certainly gained this impression from the chapters on the electronic readout system and the trigger, mainly because of the absence of similar detail to the chamber chapters. At the LHCC review level, this may not be a major cause for concern, since I suspect that there will be more reviewers there who will focus on the detectors, and on overall performance, and physics, issues. However, it could be a concern once the project is approved and the full scale practical implementation begins.

Comment to Reader: Human resources are indeed very valuable, in particular experienced designers. The collaboration has been relatively successful in recent years in attracting and building up the human resources for ASIC and hardware design, firmware and software. This process is continuing and we expect it will improve further following approval of the project and the TDR.

In a few places, the compatibility with the future, i.e. HL-LHC, was mentioned. This is a bit hard to assess: since this is the first new system being constructed and it is out of phase with the full upgrade, one could question whether there is any risk of incompatibilities arising because later changes will occur elsewhere in CMS. I presume that off-detector changes (except latency or total trigger rate) can be adapted to, and therefore this should not be a concern. I understood that the GEMs will be compatible with the eventual particle rates and radiation exposure. However, there were a few issues related to the trigger implementation, which I mention below.

The potential imbalance between activities seemed also to permeate the organization chapter. Figure 9.3, which shows the project schedule, seems to be dominated by chamber production and to contain little else. In view of the fact that, to my slight surprise, the chambers do not dominate the cost at all, this appears to suggest that the most expensive elements of the project are not receiving enough attention at the top level.

Comment to Reader: The schedule for VFAT3 and for the prototyping of key electronics modules is given in chapter 9.

There seemed to be a lack of emphasis on full system tests. This could be crucial for operation. I think there are lessons from the present trigger upgrade and the original tracker (as well as elsewhere, probably): namely, the need to be ready well in advance and to have (sizeable) contingency for commissioning. It was not clear that this is the case.

TDR change: A paragraph has been added to detail the full system test setups currently being realised and the plan for the future.

As the document was so well prepared, I spotted few typos or other such errors, and therefore mostly did not bother to list those I did find, with a few exceptions listed below.

More detailed remarks

I think it is too soon to be unduly concerned about compatibility with the foreseen track-trigger. I would simply note that in comments, such as those from L197 onwards, the difficulties of using all the information which can be provided, in real hardware, are a long way from being addressed. Whether the assembly of data from the muon detectors, of several different types and with different routings and types of information, will exacerbate such challenges I do not know.

It needs to be discussed; questions have often been raised about the utility of GE1/1 once there is a new tracker and track trigger (JH).

L237: typo

Fixed (JH).

Are some figures large enough to illustrate the text? E.g. Fig. 1.3 (right) and Fig. 1.4 are very small and do not seem to do more than provide illustrative diagrams, which may nevertheless be useful.

STILL NEEDS TO BE FIXED.

L265. Refers to a VFAT3 submission at the end of 2015; if needed, another in 2016; production in early 2017. This looks aggressive unless it is very successful. Although VFAT2 is available, I would expect more time to be needed for detailed evaluation of the performance of both the system and the detectors, and to ensure that there are no small (or large) issues with the VFAT3 under realistic conditions.

Comment to Reader: The VFAT3 chip is well advanced. All digital blocks have been described in Verilog, and most analog blocks have been simulated at schematic level and are in the process of layout. A few sensitive analog blocks have been prototyped in silicon to verify the conceptual approach. The design strategy is to submit the full VFAT3 chip. This can be done provided it features extensive testability, such that each module can be tested independently of the other modules. Full-chip verification is also crucially important before a submission. This was the approach used for the VFAT2 design and proved very efficient. The schedule is certainly ambitious, but we believe it is achievable, and we have scheduled and budgeted for two engineering runs before going to production in order to be in time for LS2.

How can it be certain that the late choice of gas will not show up unexpected issues?

Comment to Reader: Changes in the gas mixture can slightly affect the charge collection time and the total charge for a MIP. VFAT3 will have programmable shaping time and gain, so that the best combination can be selected once the final gas and detector characteristics are known. The choice of a CFD (constant fraction discriminator) to recover timing resolution makes the design tolerant to variations in the amount of charge collected and in its collection time.
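As a rough illustration of the CFD principle invoked above (a sketch, not the VFAT3 implementation: the pulse shape, sampling and 50% fraction are assumptions made for the example), the crossing time of a threshold set at a fixed fraction of the pulse's own peak is, to first order, independent of the collected charge for pulses of the same shape:

```python
import numpy as np

def cfd_time(t, v, fraction=0.5):
    """Time at which the pulse first crosses a fixed fraction of its own
    peak amplitude (a simple digital constant-fraction discriminator).
    Because the threshold scales with the pulse itself, the crossing time
    does not depend on the amplitude for pulses of identical shape."""
    threshold = fraction * v.max()
    idx = np.argmax(v >= threshold)                 # first sample above threshold
    f = (threshold - v[idx - 1]) / (v[idx] - v[idx - 1])
    return t[idx - 1] + f * (t[idx] - t[idx - 1])   # linear interpolation

t = np.linspace(0.0, 100.0, 2001)                   # ns
shape = (t / 20.0) * np.exp(1.0 - t / 20.0)         # assumed CR-RC-like pulse shape
print(cfd_time(t, 10.0 * shape))                    # large collected charge
print(cfd_time(t,  3.0 * shape))                    # small collected charge: same crossing time
```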

I did not find a mention of the chosen technology. Did I miss it? It should be given, and deserves comment (e.g. if 0.25 um: safe, inexpensive, well-known radiation hardness; if 130 nm: cost of submissions and schedule issues, radiation qualification, acceptance testing).

Comment to Reader: A policy during the design of VFAT3 was to keep the design as technology independent for as long as possible with the IBM130nm CMOS process being used for initial analog prototypes. After extensive study of different technologies, TSMC130nm was chosen as the final technology for VFAT3. The considerations studied were : Cost, accessibility to the process (both at present and for the long term), turn around times for processing, ease of design, similarity to IBM130nm, design kit development, access to IP (intellectual property) blocks and radiation tolerance. A review was held in November 2014 and TSMC130nm was chosen as the most favourable technology.

L275. I learned that the project will use μTCA and the Imperial College MP7. Although I know there have been some (very limited) contacts with our group, this cannot be assumed without much more discussion, and is likely not to be straightforward. By early 2015 we will have procured almost 100 MP7s for the trigger project. These are required and there are no significant numbers available for other users (although one module for 2015 tests should not present difficulties). We do not at present plan further orders and they would require a complete new tender, since we have reached the limit of spending on orders according to CERN rules. Nor do we wish to take on the load of more orders at present, or become a vendor for a wide community. Resources do not exist to do this. So this is a troublesome requirement, which needs exploration.

Comment to Reader: We are aware of the issue and do not wish to add to the already large commitment of Imperial College. However, it is our policy (and indeed the policy of the LHC experiments) to use common developments where possible. The MP7 is a very useful and versatile board which is ideal for use within the GEM project. The production of MP7 modules (further to the original order) is currently under investigation by the CMS electronics coordinator together with CERN purchasing with the CMS GEM project identified as one of the interested clients.

L324. It is stated that the Virtex-6 FPGA (on the OH) is rad-hard to a few Mrad. There is no reference for this statement, which is an important one. I expected to find more detail on this in the electronics chapter, which does not seem to be the case. What is the evidence for this, and what does it mean in practice? For example, often such FPGAs are refreshed (or “scrubbed”) to restore the contents (possibly gradually, at suitable intervals). This may not be compatible with operation in CMS, which effectively has a 100% duty cycle, although perhaps beam gaps could allow it. In any case, without further explanation, the use of these FPGAs may not be straightforward, so more information is needed. It also appears that newer Kintex-7 parts might be a better choice, but that also has implications if the OH is not a GEM design.

TDR change: A paragraph with a reference has been added in chapter 4: "Very forward muon trigger and data acquisition electronics for CMS: design and radiation testing", J. Gilmore et al., JINST 8 (2013) C02040.

Fig 2.3 shows a huge variation in Lorentz angle with E-B angle for different ExB. If the 90° orientation is irrelevant, since GEMs only experience 8°, this is confusing for the reader (or this one, at least). As commented earlier, the impact of gas choice is not so clear (to me) and perhaps should be.

The 90 deg plot is shown because later in the text we describe a beam test in a magnetic field. That test was done at 90 deg to compensate for the fact that we were forced to use a lower field in the test (1.5T) than the GE1/1 will experience in CMS (3T). (MH)

L531. Other high-rate experiments demonstrate that GEMs are a robust and reliable technology. I was not completely convinced by this statement which refers to experiments with very different operating conditions than CMS. Similarly, I found the statements about TOTEM (small number, special location) and LHCb (how many and where) not fully reassuring. Does this paragraph add anything without giving more details, or being qualified in some way? Alternatively, the GEM location is quite inaccessible. Can this possible concern be addressed?

The operating conditions in TOTEM and LHCb are more challenging than for the CMS muon system. The rates in LHCb are 100 times higher than in CMS because their M1 muon station is IN FRONT of the calorimeter as stated in the text. The number and location of LHCb chambers are clearly stated in the LHCb paragraph. The GEM location in CMS is as inaccessible as the ME1/1 location; the concern consequently is as large or as small as for an existing system. (MH)

L753. Discharges are inevitable. They have a limited impact, by only destroying a segment. Is this reassuring? Perhaps I missed it, but has the potential damage and risk been assessed, and is it explained in the TDR? Has a study been performed? Are the results cited?

Yes, they are inevitable in all gaseous detectors. A rare event in which the primary ionization fluctuates to very high values can induce a discharge. It is a question of the rate at which they occur. If the total charge that can flow during a discharge is limited by design, as is done in the GE1/1 design, we expect that no harm is done to the detector during the discharge. (MH)

Chapter 3

The VFAT3 ASIC is discussed at some length but a few obvious points seemed to be omitted. What is the technology? What is the size of the chip?

Comment to Reader: The VFAT3 technology is TSMC130nm. The current floor-plan for the chip measures 9.15mm x 10.6mm ≈ 97mm^2.

At what stage is the design and layout, and submissions (i.e. of any sub-circuits)?

Comment to Reader: The status of the design is as follows:
Preamplifier & shaper: a first candidate design done and fabricated as a test chip by CEA Saclay. A GEM-dedicated front-end design is underway, based on the ABCD front-end design with parameters adapted to the GEM signal characteristics.
CFD: prototyped on an MPW (IBM) test chip, recently received and tested successfully in Bari.
CBM: main components prototyped on an MPW (IBM) test chip, recently received and currently under test in Bari.
SRAMs: initially designed and fabricated in IBM as part of the ABCD chip development. Options for TSMC include a commercial SRAM compiler and a dedicated radiation-hard compiler currently under contractual development; both are under study.
Control logic, trigger unit and CommPort: design at the Verilog level is complete.
Slow control unit: design at the Verilog level is complete.
ADC: initial prototype in IBM130nm by AGH.
Simulation: a system simulation exists for the digital part; analog blocks are currently being added in Verilog-AMS.
All blocks are currently undergoing conversion to the TSMC130nm process.

Which parts already exist without requirement for change from the VFAT2, if any?

Comment to Reader: The overall architecture is similar to VFAT2. However all blocks have been redesigned to the VFAT3 specifications and technology.

What is the expectation for the design and evaluation schedule?

TDR change: The schedule is given in chapter 9 and has been updated.

Plans for assembly of electronic-hybrids, and assembly on the chambers?

Comment to Reader: The plans for production and assembly are currently under investigation with some (but not all) industrial partners selected.

I expected the radiation levels to be worse than 1 Mrad, so I started to look for the figure to confirm this. I did not find it. In the past it was traditional to include a chapter on the CMS radiation environment, which may be wise here in view of the very long term operation. Perhaps a short section summarizing the environment, including SEU issues, could be useful?

Comment to Reader: The radiation environment expected for GE1/1 is the following: maximum total dose after 3000 fb^-1 of delivered luminosity = 1 kGy (100 kRad); flux of all particles at an instantaneous luminosity of 5*10^34 cm^-2 s^-1 = 1-10 kHz/cm^2. TDR change: Plots need to be added in Chapter 6.
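For orientation only, a back-of-the-envelope conversion and scaling of the numbers quoted above; the linear scaling of flux with instantaneous luminosity is an assumption made here for the example, not a TDR statement:

```python
dose_kGy  = 1.0                      # quoted maximum total dose after 3000 fb^-1
print(f"total dose: {dose_kGy} kGy = {dose_kGy * 100:.0f} krad")   # 1 kGy = 100 krad

flux      = 10e3                     # Hz/cm^2, upper end quoted at 5e34 cm^-2 s^-1
inst_lumi = 5e34                     # cm^-2 s^-1
int_lumi  = 3000 * 1e39              # 3000 fb^-1 expressed in cm^-2
fluence   = flux / inst_lumi * int_lumi   # assumes flux scales linearly with luminosity
print(f"integrated particle fluence ~ {fluence:.0e} /cm^2")        # ~6e11 /cm^2
```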

A list of figures and tables might also be useful to some readers.

L1121: shaping time increased to avoid ballistic deficit. Although this is discussed, I did not fully understand whether bunch timing resolution, or the operational conditions, are affected. Is the occupancy explained? Are there pileup issues? For example, under what circumstances can operation be with a long shaping time? Why does this not cause pileup and signal-separation problems? Have the conditions been simulated, or included in simulations? Why is the ballistic deficit a concern, i.e. compared to individual bunch timing?

Comment to Reader: Occupancy, whilst expected to be very low, will also vary with the position in eta. To allow maximum flexibility, VFAT3 has been designed with programmable gain and shaping time, which means that it can work with an extended shaping time if the occupancy (and hence the susceptibility to pile-up) allows it. If not, the VFAT3 can be operated with a 25 ns shaping time in the same way as VFAT2.
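A minimal estimate of what a very low occupancy implies for pile-up with an extended shaping time; the per-channel strip area and the flux used below are illustrative assumptions, not TDR values:

```python
import math

rate_per_cm2 = 10e3                      # particle flux, Hz/cm^2 (upper end of the quoted range)
strip_area   = 1.0                       # effective area seen by one channel, cm^2 (assumed)
channel_rate = rate_per_cm2 * strip_area # hits per second on one channel

for shaping_ns in (25, 100):             # VFAT2-like vs extended shaping time
    tau = shaping_ns * 1e-9
    # Poisson probability of at least one additional hit within one shaping time
    p_pileup = 1.0 - math.exp(-channel_rate * tau)
    print(f"{shaping_ns:4d} ns shaping: pile-up probability per hit ~ {p_pileup:.1e}")
```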

L1354. I note that 8 MP7 boards are needed to read out the entire GE1/1 system, which is not many (although please see my earlier comments). However, chapter 9 appears to cost only 1 spare board, which I would regard as absurd. Although management of spares might fall on M&O-B and so need not appear in the detector cost, I hope a single spare board is not considered acceptable. I am sure other MP7 (or alternative!) boards will be needed for prototyping work on the trigger as well.

Comment to Reader: The costing for the entire electronics was done in great detail following strict guidelines from CMS; this includes consideration for spares. The guidelines template generally works well and allows different projects to cost their systems in a comparable manner. There are certain situations, however, where it would be sensible to go beyond the guidelines, and the MP7 falls into this category. Hence, yes, we would order more spares than listed here.

I was expecting to find clarification of any L1 rate issues but did not find this explained. Have the issues related to assembly of the L1 trigger information been studied and simulated? Are the trigger algorithms designed and implemented in software? Are they compatible with assembly of trigger data from a series of different detectors, e.g. are the data formats all different, or the same, and is there a model for processing the data to reconstruct the trigger muon candidates? Perhaps the details given in chapter 4 (tables 4.4-4.6) are useful to some readers but were not to me. I was looking for higher level explanations about how the data are assembled, where and in what latency, for example.

I was looking for comments on the development of firmware and software, which have proved to be significant efforts for the present trigger project. I was somewhat unconvinced by section 4.5.2 (“designed according to the general scheme for CMS online software”). I am aware that this has proved to be a big task for the L1 trigger upgrade, which has been under-resourced and underestimated (and probably elsewhere, but perhaps now mostly a historical problem for detectors in operation, like the tracker or ECAL).

Similarly, algorithm firmware development has not proven to be easy for Virtex-7 FPGAs. There are only a few experts. It is possible that by the time this begins for the GEMs there will be greater experience, but that does not yet seem to be the case. Perhaps the GEM algorithm requirements are very simple? While in other applications of the MP7 the firmware infrastructure has been provided, this may not be relevant here, and the GEM users should in any case be ready to accept this task.

TDR change: A paragraph has been added on firmware requirements and the development path. Comment to Reader: The GEM trigger algorithm is indeed simple: neutrons and photons will typically hit only one GEM of a superchamber. On the MP7 a LUT will be used to keep only pairs of hits that are correlated in time on both GEMs of a superchamber and consistent with a muon track coming from the IP. More than one order of magnitude in data reduction should be achievable.
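A toy version of the pad-coincidence logic described above may help the reader; the pad count, the ±1-pad LUT window and the hit format are invented for illustration and are not the GE1/1 firmware:

```python
# Keep only pairs of hits, one in each GEM layer of a superchamber, that are in
# time and whose pad combination is allowed by a look-up table encoding tracks
# pointing back to the IP. On the MP7 this selection would be a LUT in firmware.

N_PADS = 384                                          # assumed number of trigger pads
lut = {(p, q) for p in range(N_PADS) for q in range(N_PADS) if abs(p - q) <= 1}

def coincidences(hits_layer1, hits_layer2):
    """hits_layerN: list of (pad, bx) pairs seen in one readout window."""
    return [(p1, p2, bx1)
            for p1, bx1 in hits_layer1
            for p2, bx2 in hits_layer2
            if bx1 == bx2 and (p1, p2) in lut]        # in time AND pointing to the IP

# A neutron/photon conversion typically fires only one layer and is rejected;
# a muon fires matching pads in both layers and survives.
print(coincidences([(100, 5)], []))                   # background-like: []
print(coincidences([(100, 5)], [(101, 5)]))           # muon-like: [(100, 101, 5)]
```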

About the firmware infrastructure, as said above we do not wish to add to the already large commitment of Imperial College. The GEM users are aware that the firmware infrastructure will be different from that of the other applications and are ready to accept this task. Among the GEM electronics designers we are gaining experience with Virtex-7 FPGAs.

The TDR has no real description of any of these issues, which gives the impression that it has not yet been considered.

These issues also emphasise my concerns about system tests, which should if possible include operational activities, i.e. not just focusing on the GEMs and their performance.

TDR change: a section about tests has been added. Comment to Reader: The main system test for electronics, trigger and DAQ, as well as for integration with the CSC system, is in building 904, where we have a GE1/1 detector equipped with the first version of all the electronics components (except the MP7, which is replaced by a GLIB) on top of a CSC chamber. The first tests of synchronization between the two systems were successfully achieved in Fall 2014. By mid-2015 the test stand should be equipped with version 2 of the GEM electronics, similar to the one that will be installed in 2016-2017 for the Slice Test. In addition, 5 laboratories will be equipped with the full new readout system from Q2 2015 onwards, to thoroughly test the system as well as to develop the firmware and software frameworks. These test facilities will all benefit from the involvement of experts from the CSC community.

Unfortunately, there are still restrictions on the distribution of radiation-hard front-end electronics, which may affect users who are building chambers. How are the chambers tested and assembled without FE electronics? Has this been taken into account in the planning?

Comment to Reader: We are very aware of this issue and it has been taken into account when attributing collaboration tasks to institutes.

Chapter 6

There are a few typos. It is usual to say "long lived" not "long living", which is repeated several times. L1892: table number.
CR: Done

Simulations are clearly important at the GEM level. It is difficult for the reader to judge and perhaps a delicate issue to validate them. How is it possible to be sure that signal development is well simulated and the detectors will perform as in the simulations? Can the simulations already be compared with working chambers?
RH: The signal response modeling uses a simple method based on the results derived from the test beam studies and standalone simulations with FLUKA and GEANT. During Run 1 data taking, a similar simulation model was already used (and it is in use now) for signal response modeling for the RPCs, and the MC results have been validated against the experimental results.

As remarked earlier, has higher level processing (i.e. trigger data flows and latency) not been studied? I note section 6.2.1.

The GEM system requires its own controls. I could not judge how much has to be invented for this project and how much can be harmonized with the rest of CMS, which is important for both development and long term maintenance. This is discussed in section 8.2.3 for the electronics but I am unable to judge what effort is required to implement this and how much of it is compatible with other parts of CMS. It is clearly important to minimize the amount of new work if possible and to be compatible with the rest of CMS.
MM: Chapter 8 is organised to describe the CMS environment first. We aim to minimise the amount of new work, but we are still in the design process, so to some extent we are forced to be a bit "generic".

The project schedule has little on electronics or trigger as far as I can see. This is a concern. The emphasis appears to be on chambers but the electronics represents about 50% of project cost, plus power. My feeling is that the distribution of effort is not in this proportion either, which it need not be. However, it clearly must be sufficient. Is that the case?

Comment to Reader: The schedule is specified in chapter 9 with Electronics and DAQ included; the focus of the collaboration is now shifting towards preparation of the Slice Test, which is a step in the direction of having a commissioned system. We aim at preparing all components (h/w, f/w, s/w) in the direction of the final system as far as possible.

Replies by:

Paul Aspell, Gilles De Lentdecker, Marcello Maggi, Cristina Riccardi, Anna Colaleo & Archana Sharma 29.1.2015

I was satisfied with many of the answers to my comments, but not all. Some of the responses were to me only, and did not seem to imply any amendment to the TDR. This may be satisfactory but it does not convince me that those issues are resolved.

The main issues were:

(1) VFAT3 and electronic module development. I had seen the section in chapter 9. It may be sufficient for the TDR, but it is insufficient to form a judgement of the realism of the schedule and the resources available to deliver the system. I note the response that “The schedule is certainly ambitious but we believe it is achievable”. PA: The manpower available for the VFAT3 design is 1 PhD student, 2 engineers from Bari and myself from CERN. The current status of each module was mentioned in the previous note. In general terms, all VFAT3 modules (except the front-end) are designed and simulated and are now in the layout and synthesis stage. The front-end is currently under design as an adaptation of an ABCD front-end.

We have 2 strategies to optimize the design time. The first is to submit the whole chip in one go. In order to do this, the chip design has to be done such that each module is testable in its own right, independently of the other modules. This was the approach used on previous chips such as SAltro and VFAT2 and proved to be very time efficient. The second approach is as follows: we have scheduled 2 submissions. In order to be able to move to the first submission as soon as possible, we plan to use a standard SRAM compiler from ARM for the SRAM memories and the standard I/O libraries within the TSMC design kit. Recent radiation results indicate that these standard building blocks are likely to be sufficiently radiation tolerant for the GE1/1 application. In parallel, the CERN microelectronics group is arranging for a “radiation hard” SRAM compiler to be designed under contract. Once we receive the first VFAT3 submission, we will irradiate the entire chip. Those results will determine whether a second version is necessary and, if so, whether it will require the radiation-hardened SRAM.

(2) Use of the MP7. I note the answer. I favour common developments being utilized in CMS and I agree that the MP7 may be an excellent choice. However, this does not in itself provide the resources to allow it, and our experience to date in attempting to share such items, and support them, is that little help is provided by CMS, or other CMS users, to ensure that. “Investigation by the CMS electronics coordinator [who has few resources] together with CERN purchasing with the CMS GEM project as one of the interested clients” will need careful evaluation. Imperial College staff will be crucial for this, and they are extremely heavily loaded throughout 2015 at least. GD: We fully understand your concerns and really do not wish to add to the already large commitment of Imperial College. We indeed need to evaluate with Imperial College how the GEM community could help to provide resources to ensure the availability of the MP7 boards needed for GE1/1, once approved.

(3) Radiation environment in CMS. Having confirmed that I did not overlook a description of the environment, I suggest that it is essential that this be included in the TDR (and in all TDRs, as in the past). It was unclear to me whether this was the intention.

GD: Actually I asked the Ch. 6 editors to add a figure about the neutron dose and some text: “After accumulating 3000 fb−1 of integrated luminosity, the total dose amounts to 1 kGy (100 kRad) at the highest eta region of the GE1/1 chambers.”

The text has been added (line 2107)

(4) Radiation hardness of the FPGAs. I have seen the new paragraph added in chapter 4. The reference JINST 8 (2013) C02040 refers only to SEU testing and not much to total-dose hardness (up to 30 krad, if I am not mistaken). I think the reference to Mrad hardness has been removed. Maybe the important thing here is to ensure coherence in stating the requirements clearly and showing that the components will withstand them. GD: We will try to clarify the requirements: the total dose increases with eta; the VFAT3 chips are placed all along the GEM up to the highest-eta part of GE1/1 (total dose < 100 krad), while the opto-hybrid sits at the lowest-eta part of GE1/1 (total dose < 10 krad). For the opto-hybrid we will use components tested by our CSC colleagues. The opto-hybrid and the VFAT3 will also be tested for dose hardness. Again, if the figure could be added in chapter 6 this would be clearer, and we will try to do this. NB: I noticed that the document uses Mrad and MRad (which is incorrect). GD: Indeed, that is a typo.

(5) I will be interested to see the evolution of the implementation of the readout and trigger, including firmware and software. While the material in the TDR is a bit thin, I do not insist that it is essential, only that experience has told us that this is an area where optimism is high and so far, in most cases, not matched by deliverables.

GD: As written in section 4.5.4, for the December 2014 test beam the whole readout chain was tested (GEB + OH + uTCA (GLIB)). All the functionalities to control the electronics and to read out the data were implemented in firmware and software in less than 1 year. The developer team keeps growing, and a large part of the firmware and software developed for this first prototype can be re-used for the next versions of the hardware. We are aware that we still have a lot of work ahead, and we are setting up a task force to tackle these issues for the end of 2016 and the Slice Test. There will be two other deliverables before that time: the set-up of 5 test stands with the next version of the hardware (end of Q1 2015) and the set-up of a large cosmic test stand at CERN by the end of 2015. In addition there is the CERN 904 GEM-CSC integration facility, which has already started. Finally, we are now starting the implementation phase and we will ensure a certain contingency in the delivery plan, monitoring the evolution and progress. This process has already started and we have scheduled an internal review with Christoph Schwick (CMS DAQ Coordinator) at the end of February (the 23rd).
