The Large Hadron Collider (LHC), built at
CERN near Geneva, is the largest scientific instrument
on the planet. When it begins full operation in 2010, it will produce
roughly 15 Petabytes (15 million Gigabytes) of data annually, which
thousands of scientists around the world will access and analyse. Grid
computing is a mandatory ingredient for building and maintaining a data
storage and analysis infrastructure for the entire high energy physics
community that will use the LHC.
The data from the LHC experiments will be distributed around the globe,
according to a four-tiered model. A primary backup will be recorded on
tape at CERN, the "Tier-0" centre of LCG. After initial processing,
this data will be distributed to a series of Tier-1 centres, large
computer centres with sufficient storage capacity for a large fraction
of the data, and with round-the-clock support for the Grid.
The Tier-1 centres will make data available to Tier-2 centres, each
consisting of one or several collaborating computing facilities, which
can store sufficient data and provide adequate computing power for
specific analysis tasks. Individual scientists will access these
facilities through Tier-3 computing resources, which can consist of
local clusters in a university department or even individual PCs, and
which may be allocated to LCG on a regular basis.
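The tiered distribution described above can be sketched schematically. The following is a minimal illustration only: the Tier class, the replication logic, and the centre names beyond GridKa and LRZ-LMU are hypothetical stand-ins, not LCG specifications.

```python
# Illustrative sketch of the four-tiered LCG distribution model.
# The hierarchy and dataset names are invented for this example.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    level: int                              # 0 = CERN, 1 = national centre, ...
    children: list = field(default_factory=list)
    datasets: set = field(default_factory=set)

    def distribute(self, dataset: str):
        """Record a dataset locally, then replicate it to lower tiers."""
        self.datasets.add(dataset)
        for child in self.children:
            child.distribute(dataset)

# A miniature hierarchy: Tier-0 at CERN, one Tier-1, two Tier-2 sites.
tier2a = Tier("LRZ-LMU", 2)
tier2b = Tier("Other-T2", 2)                # hypothetical second Tier-2
tier1 = Tier("GridKa", 1, children=[tier2a, tier2b])
tier0 = Tier("CERN", 0, children=[tier1])

tier0.distribute("raw_run_001")
print(sorted(t.name for t in (tier0, tier1, tier2a, tier2b)
             if "raw_run_001" in t.datasets))
# → ['CERN', 'GridKa', 'LRZ-LMU', 'Other-T2']
```

In the real model, of course, each tier stores only an appropriate fraction of the data rather than a full copy; the sketch shows only the direction of the flow.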
Our group is strongly involved in supporting the commissioning and
operation of Tier-1 and Tier-2 centres (GridKa, LRZ-LMU, ATLAS-DE
cloud).
In addition, we participate in the development of tools for grid
computing (GANGA, Panda, DQ2).
We also work on tools which facilitate grid computing for end-users.
One example is distributed analysis: a typical data analysis at the LHC
involves hundreds or thousands of physics analysis jobs, which must be
created and submitted to Grid resources; their execution must be
monitored and their results collected. Another very challenging problem
is interactive Grid computing, where a user interactively executes and
controls on the order of 1000 programs running simultaneously on a
worldwide distributed Grid system. This opens up a new domain of
computing problems which have to be managed and controlled by special
software services (HammerCloud, GangaRobot, PROOF).
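The distributed-analysis cycle of creating, submitting, monitoring, and collecting jobs can be sketched as below. The Job class, its states, and the polling loop are hypothetical stand-ins for what a tool such as GANGA provides; no real Grid middleware is contacted here.

```python
# Hedged sketch of a distributed-analysis workflow: split a dataset into
# jobs, "submit" them, poll their status, and collect the results.
# All classes and states here are invented for illustration.
import random

class Job:
    def __init__(self, job_id, input_file):
        self.job_id = job_id
        self.input_file = input_file
        self.status = "new"
        self.result = None

    def submit(self):
        self.status = "running"

    def poll(self):
        # A real monitor would query the Grid site; here a job simply
        # finishes at random on some poll.
        if self.status == "running" and random.random() < 0.5:
            self.status = "completed"
            self.result = f"histogram({self.input_file})"
        return self.status

# Split the analysis over many input files: one job per file.
jobs = [Job(i, f"data_{i:04d}.root") for i in range(10)]
for job in jobs:
    job.submit()

# Monitor until everything has finished, then collect the output.
while any(job.poll() != "completed" for job in jobs):
    pass
results = [job.result for job in jobs]
print(f"collected {len(results)} results")
```

Tools like HammerCloud and GangaRobot automate exactly this kind of submit-and-monitor cycle at scale, using it to continuously test the health of Grid sites.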
More information can be found on the pages of the LHC Computing Grid
(LCG) project and the ATLAS Computing Wiki.