by Katarina Anthony-Kittelsen. Published: 11 November 2011

In the early hours of Sunday 6 November 2011, the LHC provided the first lead ion collisions of the year. ALICE recorded several thousand collision events – an accomplishment that required a full year of preparation. ALICE Matters speaks with the teams that made this happen.

ALICE was optimised principally for heavy ions – and with heavy ions come heavy data loads. The typical ALICE event display for a proton-proton collision contains around 100 particle tracks, but during a lead-lead run the number of tracks in an event jumps to around 10,000. But how are the collision events selected? How is all this new data stored? “To accommodate the increase in data, we made two major changes to the ALICE Data Acquisition (DAQ) set-up,” explains Pierre Vande Vyvre, DAQ Project Leader. “The first was to create the environment required for the increased role and performance foreseen for the HLT during the 2011 heavy-ion run.”


Event display from a lead-lead collision recorded by ALICE (6.11.2011)

To accomplish this, the team began increasing the buffering capacity of the computers on-site at ALICE. These units hold data coming from the detector while awaiting a decision from the High Level Trigger (HLT), which decides what events to keep for analysis. “We began by increasing the physical memory of the computers available,” continues Pierre. “The combined memory of these computers now allows us to store detector data for 25 seconds – compared to the 10 seconds of the previous set-up – whilst the HLT makes its decision.” The output bandwidth available to the HLT has also been increased from 0.5 to 5 GB/s.
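As a rough illustration of how those hold times translate into memory, the sketch below multiplies an assumed sustained input rate into the DAQ buffers by the 10-second and 25-second figures quoted above. The 4 GB/s rate is purely illustrative, not an official ALICE number.

```python
# Back-of-the-envelope sketch of the buffering requirement.
# The hold times (10 s and 25 s) come from the article; the sustained
# input rate into the DAQ buffers is an assumed, illustrative figure.

def buffer_memory_needed(input_rate_gb_per_s, hold_time_s):
    """Memory (in GB) needed to hold detector data while the HLT decides."""
    return input_rate_gb_per_s * hold_time_s

assumed_input_rate = 4.0  # GB/s, illustrative only

for hold_time in (10, 25):  # previous and current set-up
    memory = buffer_memory_needed(assumed_input_rate, hold_time)
    print(f"{hold_time:>2} s of buffering at {assumed_input_rate} GB/s "
          f"-> ~{memory:.0f} GB of combined memory")
```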

The other big change to the DAQ was an increase in the local storage capacity at Point 2. “We had previously been able to store 180 TB of data on-site; this has now been doubled to 360 TB,” says Pierre. “This increase gives ALICE the ability to store data independently for 20 to 100 hours. For such a data-heavy run, this was essential to ensure that not a single set of data was lost.”
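That 20-to-100-hour range can be cross-checked with a quick calculation: dividing the 360 TB of local storage by a write rate of between roughly 1 GB/s (an assumed lower bound) and the 5 GB/s peak bandwidth quoted above gives about 100 and 20 hours of autonomy respectively.

```python
# Rough check of the quoted 20-to-100-hour autonomy figure.
# 360 TB comes from the article; the 5 GB/s upper rate matches the HLT
# output bandwidth quoted above, while the 1 GB/s lower rate is assumed.

STORAGE_TB = 360

def autonomy_hours(storage_tb, rate_gb_per_s):
    """Hours of running that fit into local storage at a given write rate."""
    return storage_tb * 1000 / rate_gb_per_s / 3600

for rate in (5.0, 1.0):  # GB/s
    print(f"At {rate} GB/s: ~{autonomy_hours(STORAGE_TB, rate):.0f} hours of local autonomy")
```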

While these improvements to the DAQ system give the collaboration the storage headroom it needs, there is still the matter of selecting which data to keep. “The Time Projection Chamber (TPC) gathers 16 GB/s of data,” says Thorsten Kollegger, a member of the ALICE HLT team. “This is four times greater than our improved rate of data acquisition.”

So how is this amount of data reduced without limiting which events are selected? “Instead of choosing one event over another, we store only a pre-calculated, already processed set of the information needed to reconstruct the events. We discard the raw data,” explains Thorsten. “By doing this, we reduce the amount of data per event going to storage and can thus keep more interesting events for physics analysis. To make this possible, the rate capability of the HLT computing cluster has been increased by adding further Graphics Processing Units (GPUs) to the system. The output system to DAQ has also been reworked to provide the required higher bandwidth.”
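The sketch below is a toy illustration, not ALICE code, of the principle Thorsten describes: many raw samples are collapsed into a much smaller set of reconstructed quantities (here, simple clusters carrying a position and a charge), and only those summaries go to storage.

```python
# Toy illustration of storing processed summaries instead of raw data.
# None of this is ALICE software; it only shows that the stored output
# is far smaller than the raw input it was derived from.

from dataclasses import dataclass
import random

@dataclass
class Cluster:
    """A reconstructed space point: position plus total charge."""
    x: float
    y: float
    charge: float

def reconstruct_clusters(raw_samples, threshold=5):
    """Collapse each pad's raw ADC samples into a single cluster summary.
    The clustering is deliberately naive; the point is the size reduction."""
    clusters = []
    for x, y, adc_values in raw_samples:
        charge = sum(v for v in adc_values if v > threshold)
        if charge > 0:
            clusters.append(Cluster(x, y, charge))
    return clusters

# Fake "raw data": 1000 pads, each with 50 ADC samples.
raw = [(i % 100, i // 100, [random.randint(0, 20) for _ in range(50)])
       for i in range(1000)]

clusters = reconstruct_clusters(raw)
print(f"raw samples read in: {sum(len(samples) for _, _, samples in raw)}")
print(f"clusters stored:     {len(clusters)}")
```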

From data storage to data usage: the technicians, engineers and physicists working on ALICE Offline also had plenty on their plates. “We had to improve the performance of the analysis software, as well as the tools used to process the data,” says Alina Grigoras, a member of the ALICE Offline team. “With double the amount of data, we have doubled the amount of work.” Adjustments to the ALICE Offline code were implemented continuously over the year and were completed just in time for the November heavy-ion run.

If all goes as expected, the November heavy-ion run should be a successful time for the ALICE Offline, DAQ, and HLT teams. After lengthy preparations, the teams must still keep up the pace for another four weeks before they can sit back and enjoy the fruits of their labour. “But of course, this run will help us improve our set-up for the next run,” concludes Alina. “There is always room for improvement; it’s just the nature of what we do here at ALICE.”