by Ian Randall. Published: 31 January 2011

The Data Acquisition system at ALICE has received major upgrades, both in software and hardware, over the last few weeks. With a new operating system, more machines and extra memory at her disposal, ALICE’s data acquisition capacity has been significantly improved – enabling a more efficient collection of data from future collisions in the LHC.

Ulrich Fuchs

Some of the many PCs which form ALICE's Data Acquisition system, which received an upgrade this month

ALICE’s Data Acquisition (DAQ) currently consists of 250 PCs which read the data from the electronics and the various detector components, a farm of 90 machines which concentrate and format the data and a further 30 computers which are capable of moving this data across to CERN’s main computing centre, 12km away. An additional 20 machines also attend to general services which are required to run the system.

The hardware upgrades to the DAQ were concerned with the detector readout machines. Here, the 36 computers connected to the Time Projection Chamber (TPC) were replaced with 72 newer models – doubling their number. The 36 preexisting machines were then redistributed amongst the readout of the other detectors to improve their efficiency.

“We doubled the number of readout PCs,” says Ulrich Fuchs, the DAQ service manager at ALICE, adding: “This gives us more headroom for data taking and buffering.”

The installation was not a trivial matter – the work, which finished last Friday, took one and a half weeks in total and involved unplugging, reconnecting and documenting thousands of the cables which connect ALICE’s data acquisition computer systems. All the upgrades were carried out together with G. Simonetti and A. Grigore.

Software upgrades are also being implemented – including a shift from a 32-bit to a more modern 64-bit operating system across the board, from the readout machines themselves to the computers the shifters use in ALICE’s control room. This means switching not only the operating system, but also the hardware drivers and all of the data acquisition software.

“This change is what concerns the ALICE collaboration the most – because the detector teams have to revise their calibration code and monitoring procedures to be up-to-date,” says Fuchs. “There is still some work to be done, in close collaboration with the detector teams, to have the whole complex chain working smoothly.”

The new operating system has the benefit of increasing the amount of memory available to ALICE. “The 32-bit operating system is limited: it can only see 3.6 GB of memory in the machine, regardless of how much you plug in. Now, with a 64-bit system, we can finally see all the memory that is in the machine,” Fuchs explains. “We had machines that had 16 gigabytes installed, but we could only see the first 3.6 - more than 10 gigabytes of memory was unusable. It was really necessary to do this step: this will give us more freedom.”
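The arithmetic behind that limit can be sketched as follows (an illustrative calculation, not an official figure: a 32-bit address space spans 4 GiB in total, and the size of the region reserved for device mappings is assumed here to account for the ~3.6 GB Fuchs quotes):

```python
# Illustrative 32-bit address-space arithmetic.
ADDRESS_BITS = 32
total_address_space = 2 ** ADDRESS_BITS            # 4 GiB addressable in total

# Assumption: ~0.4 GiB of the address space is reserved for memory-mapped
# I/O (PCI devices, firmware), leaving roughly what the article quotes.
mmio_reserved = int(0.4 * 2 ** 30)
usable_gib = (total_address_space - mmio_reserved) / 2 ** 30

installed_gib = 16                                  # as on the machines mentioned above
unusable_gib = installed_gib - usable_gib

print(f"usable under 32-bit: {usable_gib:.1f} GiB")        # ~3.6 GiB
print(f"unusable of 16 GiB installed: {unusable_gib:.1f} GiB")
```

A 64-bit operating system removes this ceiling entirely, which is why all 16 GB becomes visible after the upgrade.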

In the future, the DAQ team hopes to increase the temporary storage available to ALICE. At the moment, ALICE has 150 terabytes: enough to store 4-5 hours of concentrated data on disk in the event that it proves impossible to send the information to CERN’s computing centre. Originally, it was intended that ALICE should be able to run autonomously for around 10-12 hours in such a case – but with ALICE recording more data than anticipated, the DAQ team plans to increase this storage to 400 terabytes to provide more leeway.
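A rough back-of-the-envelope check shows the two figures are consistent (the data rate below is inferred from the numbers quoted above, not an official ALICE figure):

```python
# Buffering arithmetic implied by the figures in the text.
current_storage_tb = 150
current_autonomy_hours = 4.5          # midpoint of the quoted 4-5 hours

# Implied sustained data rate onto disk:
rate_tb_per_hour = current_storage_tb / current_autonomy_hours   # ~33 TB/h

# Autonomy after the planned upgrade, at the same rate:
planned_storage_tb = 400
planned_autonomy_hours = planned_storage_tb / rate_tb_per_hour

print(f"implied rate: {rate_tb_per_hour:.0f} TB/h")
print(f"autonomy with 400 TB: {planned_autonomy_hours:.0f} hours")
```

At the implied rate, 400 terabytes buys about 12 hours of autonomous running, matching the original 10-12 hour design goal.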


Comments

Fascinating stuff

What operating system do you use? I had exactly the same problem with my 32-bit OS laptop!

What kind of infrastructure carries the data from the detectors to where the data is analysed? That must have cost a fortune to set up. If you can, please release a specifications sheet for the machines - am just curious.
Sarai

Tech Info

We switched to SLC5.5/64 (based on Redhat Enterprise Linux 5).
The data is read from the detector electronics through DDL links (serial links over fiber, http://cern.ch/ddl) into the readout PCs (Supermicro X7DBE, 2.2GHz, 8GB), which send the data over GigEthernet to the event building machines (HP ProLiant DL160 G6, Xeon E5520, 16GB DDR3). From there it's 4x10GigE over LR fiber to the computing centre (tape, Grid).
U.Fuchs.
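For a sense of scale, the last hop mentioned in the reply above works out as follows (a quick illustrative calculation assuming full link utilisation and ignoring protocol overhead):

```python
# Aggregate bandwidth of the 4 x 10 GigE links to the computing centre.
links = 4
gbit_per_link = 10
total_gbit_s = links * gbit_per_link      # 40 Gb/s
total_gbyte_s = total_gbit_s / 8          # 5 GB/s raw

# Time to drain the 150 TB disk buffer at this raw rate
# (assumed idealised figure, ignoring overhead and concurrent data taking):
buffer_gb = 150 * 1000
drain_hours = buffer_gb / total_gbyte_s / 3600

print(f"aggregate link rate: {total_gbyte_s:.0f} GB/s")
print(f"time to drain 150 TB: ~{drain_hours:.1f} hours")
```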