The LHC Computing Grid was a pioneering integration effort that managed to unite computing and storage resources all over the world, making them available to the experiments at the Large Hadron Collider.
Over a decade of LHC computing, the Grid software has learned to effectively utilize different types of computing resources, such as classic computing clusters, clouds and high-performance computers.
And while the resources the experiments use are the same, the data flow differs
from experiment to experiment.
A crucial part of each experiment's computing is a production system,
which describes the logic of, and controls, the experiment's data processing.
COMPASS has always relied on CERN facilities, and when CERN, during a hardware and software upgrade, started migrating to resources available only via the Grid, the experiment faced a shortage of resources on which to process its data.
To enable COMPASS data processing to work via the Grid, development of a new production system was started.
Key features of the modern production system for COMPASS are:
- distributed data processing,
- support of different types of computing resources,
- support of an arbitrary number of computing sites.
The building blocks of the production system are taken from the achievements
of the LHC experiments, but the logic of data processing is COMPASS-specific and unique.
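To make the listed features more concrete, below is a minimal, purely illustrative Python sketch of how a production task could be split into jobs and distributed over an arbitrary number of heterogeneous computing sites. All class and function names (Site, Task, dispatch) are hypothetical and are not taken from the actual COMPASS production system code.

```python
# Illustrative sketch only: a "task" describing a data-processing campaign is
# split into jobs, and the jobs are dispatched round-robin to any number of
# computing sites of different types (cluster, cloud, HPC). Names are made up
# and do not reflect the real COMPASS production system implementation.
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class Site:
    name: str
    kind: str                              # e.g. "cluster", "cloud", "hpc"
    assigned: list = field(default_factory=list)

@dataclass
class Task:
    name: str
    input_files: list                      # raw data files to be processed

    def split_into_jobs(self, files_per_job: int = 10):
        """One job processes a fixed-size chunk of the task's input files."""
        for i in range(0, len(self.input_files), files_per_job):
            yield {"task": self.name,
                   "files": self.input_files[i:i + files_per_job]}

def dispatch(task: Task, sites: list):
    """Distribute jobs over however many sites are defined."""
    for job, site in zip(task.split_into_jobs(), cycle(sites)):
        site.assigned.append(job)

if __name__ == "__main__":
    sites = [Site("CERN-batch", "cluster"),      # any number of sites
             Site("JINR-cloud", "cloud"),        # of any type may be added
             Site("HPC-centre", "hpc")]
    task = Task("dst-production", [f"raw_{i:04d}.dat" for i in range(100)])
    dispatch(task, sites)
    for s in sites:
        print(f"{s.name:12s} ({s.kind:7s}): {len(s.assigned)} jobs")
```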
For details of the implementation of the Grid production system for COMPASS,
see the report "COMPASS Grid Production System", Artem Petrosyan, 2017.