CICC farm configuration.

The JINR CICC is logically constructed as a single information and computing resource for all JINR projects. All computational and storage resources are served by a unified base software environment, which allows CICC resources to be used both in international distributed computing projects (WLCG, EGEE, PANDA-GRID, CBM) and locally by JINR users.

Priority is given to the tasks of the NICA project (BM@N, MPD, SPD), the neutrino programme (NOvA, Daya Bay, JUNO, etc.), the processing of data from experiments at the LHC (ATLAS, ALICE, CMS) and FAIR (CBM, PANDA) and other large-scale experiments, as well as support for users from the JINR Laboratories and the participating countries.

The software settings of the complex are optimized to maximize the use of computational resources and to support the most versatile and secure methods of access to data stores. The distribution and accounting of computing resources are handled by the batch processing system and resource scheduler (Slurm, see the 2020 changes below).
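As an illustration, here is a minimal Python sketch of submitting a job to the Slurm batch system; the partition name "cicc", the resource limits, and the executable ./my_analysis are hypothetical placeholders, not actual CICC settings:

    import subprocess

    # Minimal sketch: submit a batch job by piping a job script to sbatch.
    # NOTE: the partition "cicc", the limits, and ./my_analysis are
    # illustrative assumptions, not documented CICC values.
    job_script = "\n".join([
        "#!/bin/bash",
        "#SBATCH --job-name=demo",
        "#SBATCH --partition=cicc",
        "#SBATCH --time=01:00:00",
        "#SBATCH --mem=2G",
        "./my_analysis",
    ]) + "\n"

    # sbatch reads the job script from stdin when no file argument is given
    result = subprocess.run(
        ["sbatch"],
        input=job_script,
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout.strip())  # e.g. "Submitted batch job 123456"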

Data access is provided by the EOS, XRootD, dCache (ATLAS and CMS only), and EOS CTA storage systems; access to common software and user home directories is provided via AFS, CVMFS, and GitLab. Kerberos 5 and LDAP are used to register and authenticate local JINR users.
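As a simple illustration of the data-access side, the Python sketch below reads a file from EOS over the XRootD protocol using the XRootD Python bindings; the endpoint root://eos.jinr.ru and the file path are invented placeholders:

    from XRootD import client  # XRootD Python bindings

    # Sketch: read a file from an EOS instance over the XRootD protocol.
    # The endpoint "eos.jinr.ru" and the path below are hypothetical
    # placeholders, not actual CICC namespace entries.
    url = "root://eos.jinr.ru//eos/user/j/jdoe/example.dat"

    with client.File() as f:
        status, _ = f.open(url)
        if not status.ok:
            raise RuntimeError(status.message)
        status, data = f.read()  # read the whole file into memory
        print(len(data), "bytes read")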

MLIT has developed and actively uses a system for "rolling out" operating systems, i.e. for the mass remote installation of base software on new computers. The system is based on standard Linux OS tools and is supplemented with elements of the Warewulf software. It automates the mass installation of software on new machines and the mass replacement of operating system versions. Automatic configuration of the various services is performed with Puppet, while Ansible is used for configuring the WLCG services.
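For example, a mass configuration run of this kind could be triggered from Python roughly as follows; the inventory and playbook names are purely illustrative and are not the actual MLIT files:

    import subprocess

    # Sketch: run an Ansible playbook against a group of service hosts.
    # "hosts.ini" and "wlcg-services.yml" are hypothetical names used only
    # to illustrate the mass-configuration workflow described above.
    subprocess.run(
        ["ansible-playbook", "-i", "hosts.ini", "wlcg-services.yml"],
        check=True,  # raise an error if the playbook run fails
    )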

Computing Resources (CE):

  • Interactive cluster: lxpub[01-05].jinr.ru
  • User interface lxui[01-04].jinr.ru (gateway for external connections; see the SSH sketch after this list)
  • Computing farm
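A hedged sketch of connecting through the lxui gateway from Python with the paramiko library; the user name is a placeholder, and in practice any standard SSH client works the same way:

    import paramiko

    # Sketch: open an SSH session to the external-access gateway
    # lxui01.jinr.ru and run a command. The user name "jdoe" is a
    # hypothetical placeholder; authentication depends on the account
    # setup (password, keys, Kerberos).
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("lxui01.jinr.ru", username="jdoe")

    _, stdout, _ = ssh.exec_command("hostname && uptime")
    print(stdout.read().decode())
    ssh.close()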

Storage (SE):

Eight old disk servers were removed and four new DELL R740 servers (300 TB each) were added. The disk SE now has 4 servers, while EOS has 28. As a result, we have at present:

  • EOS
  • EOSCTA
  • AFS: ~
  • CVMFS
  • dCache (for CMS and ATLAS only)
  • tape robot
  • tape libraries
  • Tapes@Enstore

Auxiliary servers

  • e-mail
  • home WWW pages
  • databases: MySQL, PostgreSQL, Oracle
  • DNS
  • Nagios monitoring
  • ftp

These services usually function "transparently" from the user's viewpoint, i.e. they need no additional settings. Some of them support LIT and JINR services (WWW, FTP, MySQL), while others mainly serve JINR users directly (mail).

WLCG.

To serve the WLCG site at JINR (the site is a separate cluster in the distributed environment of the WLCG) and other international collaborations, 22 servers running the gLite system (the WLCG middleware) are installed. In addition to supporting the JINR-LCG2 site itself, some of these servers implement important services and support functions for the Russian segment of the WLCG project.

 


Archive:

CICC 2023

2022 CICC configuration

2021 CICC configuration

2020 CICC configuration

2019 CICC

2018 CICC architecture and facilities

 

In 2020, the following work was performed to change the configuration of the Tier-2 site (JINR-LCG2):

1) The batch server was changed to Slurm. It is operational and used by the CE.

2) The Computing Element (CE) for WLCG was changed to ARC6 (with an internal Slurm queue), since CREAM-CE is no longer supported by the central task launcher (see the job-submission sketch after this list).

3) The farm was migrated to Scientific Linux release 7.

4) A new tape robot (IBM TS4500) was added.

5) PhEDEx was replaced by Rucio.

6) New servers were connected to EOS.

7) The user interface lxui[01-04].jinr.ru was set up as a gateway for external connections.

8) The disk space of the interactive cluster /scrc was increased (2.2 TB).
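To illustrate item 2, the sketch below submits a job to an ARC6 Computing Element by writing an xRSL job description and calling the standard arcsub client; the CE endpoint ce01.jinr.ru and the file names are assumptions, not the actual JINR-LCG2 values (a valid grid proxy is also assumed):

    import subprocess

    # Sketch: submit a grid job to an ARC6 CE with the arcsub client.
    # The endpoint "ce01.jinr.ru" and the file names are hypothetical
    # placeholders; a valid grid proxy is assumed to exist.
    xrsl = ('&(executable="run.sh")(jobname="demo")'
            '(stdout="out.txt")(stderr="err.txt")(cputime="60")')

    with open("demo.xrsl", "w") as f:
        f.write(xrsl)

    # arcsub ships with the ARC client tools; -c selects the target CE
    subprocess.run(["arcsub", "-c", "ce01.jinr.ru", "demo.xrsl"], check=True)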