by Panos Charitos. Published: 17 May 2013


The miniaturization of transistors on chips continues as predicted by Moore's law, yet computer hardware has begun to face scaling issues, the so-called performance 'walls'. The best known is probably the 'power wall', which limits clock frequencies; as a consequence, processor performance is now increased mainly by integrating many cores on the same chip. In addition, vector units are once again becoming standard. To avoid wasting the available resources, a major effort is needed to re-engineer existing HEP software so that it can exploit them efficiently in the future.

This is a major point of concern at the LHC as increasing trigger rates and higher event complexity result in ever-increasing demands on both CPU and memory footprint. Things are becoming more challenging as new generations of computers have started exploiting higher levels of parallelism based on multiple CPUs and new CPU micro-architectures. Significant agility will be needed to adapt and even re-design the algorithms and data structures of existing HEP code to fully utilize the available processing power. This will require a thorough knowledge of the concepts behind various parallelization methodologies. It is clear that the planned increase in luminosity at the LHC will further push the need for finer-grain parallelism in data processing applications. 

Multi-core chips are paving the way for more powerful processors.

In order to address some of these issues, a workshop was held by the Brazilian ALICE group at the University of São Paulo. The workshop offered a quick start in parallel software design and consisted of three major parts. The first part gave an introduction to the basic tools of software development. The second part addressed the design of high-energy physics software and how it is being challenged by modern CPU architectures. Finally, the third part included hands-on sessions in parallel software development.

One of the aims of this workshop was to help the local group take the first steps towards this goal. As Marcelo Munhoz, one of the organizers of the workshop, says: “For us it was instrumental to create a realistic perspective for a Brazilian participation in the computing part of the ALICE upgrade programme”. The Brazilian group is engaged in several ALICE physics analyses and also operates a Tier-2 centre of the Worldwide LHC Computing Grid (WLCG).

The server farm in the 1450 m2 main room of the CERN Data Centre (pictured) forms Tier 0, the first point of contact between experimental data from the LHC and the Grid.

The series of lectures on parallel software development aimed to give post-docs and senior graduate students the tools required for efficient software development. Moreover, concrete advice on the group's existing projects was provided by CERN/SFT researcher Benedikt Hegner, an expert on the LHC experiments’ reconstruction and analysis software and on parallel software development.

The group is discussing the next steps with Pierre Vande Vyvre. Notably, a direct consequence of Benedikt's workshop was the interest of two PhD students in the project. Marcelo adds: “One of them (Caio Prado) is a physicist from our group, who finished his master's degree under the supervision of Prof. Alexandre Suaide and got very interested in a computing project after participating in the workshop. The other one is Nicolas Kassalias, who has a master's degree in engineering and also got interested after participating in the workshop. Nicolas will be supervised by Prof. Sergio Takeo, from the University of São Paulo Engineering School. The fact that we have a physicist and an engineer working together will be a very interesting synergy”. Indeed, Nicolas's work is the result of the strong collaboration between the Brazilian ALICE group and the group from the University's Engineering School.

Other departments at USP were also interested, so the lectures were generalized for a wider audience. In the end more than twenty people participated on a regular basis: half of them from the physics department, a quarter from the computing and engineering departments and the rest from various other parts of the university. The event was very well received and has already opened new avenues for future collaboration between CERN and the University of São Paulo. It will hopefully help the team contribute to the future re-engineering of the ALICE code in a way that exploits parallelism and makes more efficient use of computing resources.

The success of the event was made possible by the efforts of the local hosting group, namely Prof. Munhoz, Prof. Suaide, Renato Borges and Douglas Vieira. One should also acknowledge the efforts of Benedikt Hegner from the SFT group, who travelled to São Paulo, and of John Harvey, head of the SFT group at CERN. The workshop was generously sponsored by the EPLANET project.