by Federico Ronchetti. Published: 28 October 2011

The ALICE experiment is dedicated to heavy-ion physics. However, the beam time that the LHC accelerator allocates to heavy-ion collisions is limited to the period between mid-November and the beginning of December, which makes this the most crucial time of year for ALICE data collection. The many different subsystems of the experiment are expected to be in their best operational condition for this period, and so the right time to perform a vast series of tests and optimizations on the detector is, in fact, October.


The ALICE control room, with Federico, the October PRC, in the foreground

At the same time, the LHC accelerator itself often changes its setup, interleaving periods of machine development (this is what accelerator people call it when they fiddle around with the machine, changing the optics setup or the beam time structure) with luminosity production (i.e. the regular collision regime, during which the experiments carry on collecting data).

As ALICE period run coordinator (PRC) I have to attend, each (early) morning, the daily machine operations meeting at the CERN Control Centre in Prevessin, where the accelerator coordinators present the status of the machine over the past 24 hours. Then I must plan the coming day: I collect this information and match the experiment's setup to the proposed LHC activity for that day, in order to maximize the data-collection or testing efficiency.

Scheduling and managing all the required tests and optimizations produces a continuous flow of configuration changes for the ALICE experiment, which puts a lot of pressure on the detector experts and on the shift crew. This stream eventually manifests itself as a nearly continuous ringtone coming from the PRC phone, 18 hours a day.

The preparation activity carried out during this month targeted the readiness of the detector to cope with the large event sizes generated by heavy-ion collisions. In fact, this kind of collision produces several thousand particles, all of which have to be detected in ALICE. The information about the collision event (its “photo”) has to be recorded so that it can be analyzed offline, i.e. at a later time than the actual moment of collision. The more complex the collision event, the higher the resolution needed for the “camera” (ALICE) to take a “photo” that resolves the details of the event. Ions are much heavier and more composite than a single proton, which increases the complexity of heavy-ion collision events.

One main contributor to the accuracy of the “photo” of the event is the TPC (Time Projection Chamber), which measures the tracks of the charged particles produced in the collision. One trick to reduce the network bandwidth and storage needed to handle the event and record it on tape is to compress the data on the fly, right after each collision happens.

For this reason, a large effort has been made to implement and apply a highly sophisticated, customized compression algorithm to the collected data, using the computing power provided by the ALICE High Level Trigger (HLT): a large computer cluster able to perform online event compression, selection and monitoring.

The HLT runs the compression algorithm on the original TPC data (called clusters) and substitutes them with its “compressed clusters”, shrinking the overall bandwidth requirements of the data recording.

In practice, during the heavy-ion run the original TPC cluster data will be discarded and only the compressed clusters will be stored; a small fraction of the original data will be kept for cross-check purposes. This procedure must be absolutely reliable and accurate, and the quality of the compressed data must be such that all subsequent data analysis tasks can be performed as if they were acting on the original data.
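For the curious, the record-and-cross-check scheme can be pictured with a small toy sketch (hypothetical Python, not the actual HLT code: the function names, plain zlib compression, and the 5% sampling fraction are all illustrative assumptions):

```python
import random
import zlib

def compress(cluster_bytes: bytes) -> bytes:
    # Stand-in for the HLT's custom TPC cluster compression
    # (plain zlib here, purely for illustration).
    return zlib.compress(cluster_bytes)

def process_event(clusters: bytes, sample_fraction: float = 0.05) -> dict:
    # Replace the original clusters with compressed ones; keep the
    # originals only for a small sampled fraction of events, so the
    # compression can be cross-checked offline.
    record = {"compressed_clusters": compress(clusters)}
    if random.random() < sample_fraction:
        record["original_clusters"] = clusters  # kept for cross-checks
    return record

payload = b"example TPC cluster payload " * 100
event = process_event(payload)
# Sanity check: the stored data must reconstruct the original exactly.
assert zlib.decompress(event["compressed_clusters"]) == payload
```

The key property the sketch illustrates is the last line: whatever the real algorithm is, the compressed record must let the analysis recover the physics content of the original clusters.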

A large part of the time was therefore used to perform a full quality assessment of this procedure, running the TPC and HLT in different configurations on proton data to make sure that the overall system is stable and reliable, and that no information is lost when we throw away the original clusters during the heavy-ion run.

But the TPC was not the only baby we had to take care of. A very extensive effort has been made to activate, test and validate the various electronic setups of the different ALICE subsystems that perform event selection. In fact, most collisions produce events that are not very interesting, because they have already been seen many times. We look for the signatures of so-called “rare” processes, so ALICE has an electronic setup that pulls the trigger (opens the shutters of the camera, to stay with the photographic metaphor) only when a defined signature (i.e. a specific pattern of particle hits in the different detectors) emerges from the collision. ALICE has multiple levels of triggers (“shutters”) which operate in a chain. The basic level is called L0, and the selection logic gets more refined as the trigger level increases. So October was also used to activate and validate a second level of selection (called L1) for many detectors, making sure that the L1 trigger response would be fast enough to operate efficiently.
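A chain of trigger levels behaves like a sequence of increasingly selective filters, each evaluated only if the previous one fired. The sketch below is a purely illustrative toy (the event fields, thresholds and predicates are invented for the example, not the real ALICE trigger logic):

```python
def l0_decision(event: dict) -> bool:
    # Fast, coarse selection: e.g. a minimum number of hits in the
    # fast detectors (threshold chosen for illustration only).
    return event["fast_hits"] >= 2

def l1_decision(event: dict) -> bool:
    # Slower, more refined selection, evaluated only for events
    # that already passed L0 (illustrative energy cut).
    return event["calorimeter_energy"] > 10.0

def trigger(event: dict) -> bool:
    # Each level "opens the shutter" only if the previous one fired.
    return l0_decision(event) and l1_decision(event)

events = [
    {"fast_hits": 3, "calorimeter_energy": 25.0},  # rare signature: kept
    {"fast_hits": 1, "calorimeter_energy": 40.0},  # fails L0: rejected
    {"fast_hits": 4, "calorimeter_energy": 2.0},   # passes L0, fails L1
]
selected = [e for e in events if trigger(e)]
```

Only the first toy event survives both levels, which is the whole point: each successive level spends more time per event but sees far fewer of them.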

Finally, some subsystems change role and functionality when the accelerator switches from proton to heavy-ion running. For example, the collision rate is measured by one subsystem (called V0) during the proton run and by another (the ZDC, Zero Degree Calorimeter) during the heavy-ion run.

In conclusion, the life of the ALICE PRC is quite hard. I have to be aware, 24 hours a day, of the state of all the subsystems and of all the issues and requests, and I have to take the necessary steps toward successful problem solving. I also have to attend and chair lots of meetings (at least two a day, weekends included). The good part is that I have learned a lot about the experiment and the accelerator and, most important of all, that I am part of a team whose enthusiasm is the best replacement for the lost sleep!