Tier2 (JINR-LCG2) farm configuration 2022

Computing resources (CE):

  • Interactive cluster: lxpub[01-05].jinr.ru
  • User interface: lxui[01-04].jinr.ru (gateway for external connections)
  • Computing farms:

– general-purpose farm,

– farm for the LHC experiments,

– cluster for NICA, MPD, BM@N (lxmpd-ui.jinr.ru)

2022Q1: 7700 cores

121076.99 HEP-SPEC06 total performance

30269.25 HEP-kSI2k total performance

 

Storage (SE):

8 old disk servers were removed and 4 new DELL R740 servers (300 TB each) have been added. The SE disk pool now has 4 servers; EOS has 28. As a result, we have at present:

  • EOS: 16582.35 TB total
  • AFS: ~12.5 TB (user home directories, workspaces)
  • CVMFS: 140 TB capacity on 2 machines, 2×70 TB hardware RAID1 (one was previously 9.5 GB); VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd
  • dCache (for CMS and ATLAS only)

SE disks: 11.02 PB (11021.51 TB)
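As a quick sanity check on the SE disk figure (taking the total as 11021.51 TB and assuming decimal units, 1 PB = 1000 TB), the TB and PB numbers can be reconciled with a one-liner:

```shell
# Convert the reported SE disk total from TB to PB (decimal units assumed).
tb=11021.51
pb=$(awk -v t="$tb" 'BEGIN { printf "%.2f", t / 1000 }')
echo "SE disks: ${pb} PB (${tb} TB)"
# prints: SE disks: 11.02 PB (11021.51 TB)
```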

Tape robot: 3003 TB

Tape libraries, allocated capacity for the T1CMS logical library: 51.5 PB = IBM TS3500 (11.5 PB) + IBM TS4500 (40 PB)

 

Software

2022Q1:

CentOS / Scientific Linux release 7.9

CC: gcc (GCC) 4.4.7
C++: g++ (GCC) 4.4.7
FC: GNU Fortran (GCC) 4.4.7
FLAGS: -O2 -pthread -fPIC -m32

BATCH: SLURM with adaptation to Kerberos and AFS
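A minimal SLURM batch script for this setup might look as follows. This is a sketch, not site documentation: the job name, time limit, and the benchmark source file `bench.c` are hypothetical, and the compile line simply reuses the flags listed above.

```shell
#!/bin/bash
#SBATCH --job-name=bench32      # hypothetical job name
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# Build a 32-bit binary with the farm's standard flags (gcc 4.4.7 era).
# bench.c is a placeholder source file, not part of the original text:
# gcc -O2 -pthread -fPIC -m32 -o bench bench.c

msg="job running on $(hostname)"
echo "$msg"
```

Submission would be via `sbatch job.sh`; the site's Kerberos/AFS adaptation presumably forwards the user's tokens so that AFS home directories remain accessible from the worker nodes.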

dCache 5.2

Enstore 6.3 for tape robots

SPECall_cpp2006 with 32-bit binaries (SPEC2006 version 1.1)

WLCG

FTS

FairSoft

FairRoot

MPDroot