Tier2 (JINR-LCG2) 2021 farm configuration


Summary, 2021Q3:

Computing Resources (CE):

  • Interactive cluster: lxpub[01-05].jinr.ru
  • User interface: lxui[01-04].jinr.ru (gateway for external connections)
  • Computing farms:

– general-purpose farm,

– farm for LHC experiments,

– cluster for NICA, MPD, BM@N (lxmpd-ui.jinr.ru)

Total:
7700 cores

121076.99 HEP-SPEC06 total performance

30269.25 HEP-kSI2k total performance (1 kSI2k = 4 HS06)
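The two totals above are the same farm capacity expressed in different WLCG accounting units, using the conventional conversion 1 HEP-kSI2k = 4 HEP-SPEC06. A minimal consistency check (a Python sketch; the per-core average is derived here, not quoted in the source):

    # Consistency check of the quoted farm capacity, assuming the
    # conventional WLCG conversion 1 HEP-kSI2k = 4 HEP-SPEC06.
    total_hs06 = 121076.99
    total_ksi2k = 30269.25
    cores = 7700

    assert abs(total_hs06 / 4 - total_ksi2k) < 0.01   # 121076.99 / 4 = 30269.2475
    print(f"{total_hs06 / cores:.2f} HS06 per core")  # ~15.72 HS06/core (derived)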


 Storage (SE):

8 old disk servers were removed and 4 new DELL R740 servers (300 TB each) were added. The disk SE now has 4 servers; EOS has 28. As a result, we have at present:

 

EOS: Total: 16582.35 TB

AFS: ~12.5 TB (user home directories, workspaces)

CVMFS: 2 machines, 2×70 TB h/w RAID1 (VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd).

dCache (for CMS and ATLAS only):

SE disks: 11.02 PB (11021.51 TB)

Tape robot: 3003 TB

Tape libraries, allocated capacity for the T1CMS logical library: 51.5 PB, IBM TS3500 (11.5 PB) + IBM TS4500 (40 PB)
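As a quick sanity check on the mixed TB/PB figures above (assuming decimal units, 1 PB = 1000 TB):

    # Unit consistency of the storage figures above (1 PB = 1000 TB assumed).
    print(11021.51 / 1000)  # SE disks: ~11.02 PB
    print(3003 / 1000)      # tape robot: ~3.0 PB
    print(11.5 + 40.0)      # T1CMS logical library: 11.5 + 40 = 51.5 PB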

Software

OS: Scientific Linux release 7.9

GCC: gcc (GCC) 4.4.7; C++: g++ (GCC) 4.4.7

FC: GNU Fortran (GCC) 4.4.7; *FLAGS: -O2 -pthread -fPIC -m32

BATCH: SLURM with adaptation to Kerberos and AFS

dCache-5.2; Enstore 6.3 for tape robots

SPECall_cpp2006 with 32-bit binaries, SPEC2006 version 1.1
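The flag set above is what the 32-bit SPECall_cpp2006 binaries are built with. A minimal illustrative compile line (bench.cpp is a hypothetical placeholder, not part of the SPEC suite, and the 32-bit glibc development packages must be installed):

    # Illustrative only: building a 32-bit, position-independent,
    # threaded binary with the flags listed above (gcc/g++ 4.4.7).
    g++ -O2 -pthread -fPIC -m32 -o bench bench.cpp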

WLCG

FTS

FairSoft

FairRoot

MPDroot

 


Summary, 2021Q1:

dCache:

Disk servers: 10.68 PB

Tape robot: 3003 TB

Tape libraries, allocated capacity for the T1CMS logical library: 51.5 PB, IBM TS3500 (11.5 PB) + IBM TS4500 (40 PB)

EOS: 7.198 PB

AFS: ~12.5 TB (user home directories, workspaces)

CVMFS: 2 machines, 2×70 TB h/w RAID1 (VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd).


 

2021 (January):

AFS: ~12.5 TB (user home directories, workspaces)

EOS: 7.198 PB

CVMFS: 2 machines, 2×70 TB h/w RAID1 (VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd).

dCache:

– disk servers: 9.7 PB;

– tape robot: 3003 TB;

– tape library (T1CMS): IBM TS3500 (1.16 PB) + IBM TS4500 (1.6 PB) (VOs: ATLAS, CMS)

xrootd: 40 GB

Computing Resources (CE):

  • Interactive cluster: lxpub[01-05].jinr.ru
  • User interface: lxui[01-04].jinr.ru (gateway for external connections)
  • Computing farms:

– general-purpose farm,

– farm for LHC experiments,

– cluster for NICA, MPD, BM@N (lxmpd-ui.jinr.ru)

Total: 248 compute nodes (WNs), 4128 cores (slots), 55489 HS06.

Auxiliary servers

  • e-mail
  • home WWW pages
  • databases: MySQL, Postgres, Oracle
  • DNS
  • Nagios monitoring
  • ftp

These services usually function “transparently” from the user's point of view, i.e. they need no additional settings. Some of them support LIT and JINR services (WWW, ftp, MySQL), while others mainly serve JINR users directly (mail).

WLCG.

To serve the WLCG site at JINR (the site is a separate cluster in the distributed WLCG environment) and other international collaborations, 22 servers with the gLite system (WLCG middleware) are installed. In addition to supporting the JINR-LCG2 site itself, some of these servers implement important services and support functions for the Russian segment of the WLCG project.

The following virtual organizations have been added to the JINR Tier2:
ILC (WLCG, http://www.linearcollider.org/),

MPD (JINR NICA), BM@N (JINR NICA), COMPASS (WLCG CERN).
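For reference, a member of one of these VOs would authenticate against the site with a VOMS proxy before submitting work. A minimal sketch using the standard VOMS client (the VO name mpd is a placeholder; actual VO names are defined by the collaborations, not confirmed by this page):

    # Sketch: obtaining a 24-hour VOMS proxy for a VO supported at the site.
    # 'mpd' is a placeholder VO name.
    voms-proxy-init -voms mpd -valid 24:00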

For the JUNO VO, several services will be installed and configured at Tier2:

– a CE for JUNO, allowed to run tasks on the JINR Tier2 farm;
– a VOMS server, a mirror of the main VOMS in China;
– a CVMFS stratum-1 server, to support access to the JUNO software repositories in China.
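Once the stratum-1 is in place, worker nodes would reach the JUNO repositories through the standard CVMFS client configuration. A minimal sketch, assuming the repository is published as juno.ihep.ac.cn and a local squid proxy exists (both names are assumptions):

    # /etc/cvmfs/default.local — client-side sketch; names are assumptions.
    CVMFS_REPOSITORIES=juno.ihep.ac.cn
    CVMFS_HTTP_PROXY="http://squid.jinr.ru:3128"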

Network and telecommunications.

The JINR network infrastructure includes the following components:

  • external optical telecommunication data channel JINR–Moscow;
  • the backbone of the JINR local computer network;
  • local computer networks of the institute’s divisions.

The JINR network has direct connections to a number of scientific, educational and public networks at speeds:

  • with LHCOPN network – 2×100 Gbps;
  • with LHCONE network – 2×100 Gbps;
  • with GEANT network – 10 Gbps;
  • with RBnet network – 10 Gbps;
  • with networks of Moscow and St. Petersburg – 10 Gbps;
  • with the Internet – 10 Mbps.

Interconnection with urban networks is organized on the basis of the DBN-IX traffic exchange node. The local network of the institute has direct connections at speeds:

  • with the LanPolis network (Net by Net) – 100 Mbps;
  • with the Contact network – 1 Gbps;
  • with the Telecom-MPK network – 1 Gbps;
  • with the TELESET network – 1 Gbps.

 

In 2020, the following work was performed to change the configuration of Tier2 (JINR-LCG2):

1) The batch server was changed to SLURM; it is operational and used by the CE.

2) The Computing Element (CE) for WLCG was changed to ARC6 (with an internal SLURM queue), since CREAM-CE is no longer supported by the central task launcher; a configuration sketch follows after this list.

3) The farm was migrated to Scientific Linux release 7.

4) A new tape robot (IBM TS4500) was added.

5) PhEDEx was replaced by Rucio.

6) New servers were connected to EOS.

7) The disk space of the interactive cluster /scrc was increased (2.2 TB).
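A minimal sketch of the ARC6-to-SLURM coupling mentioned in item 2. Block names follow the ARC6 arc.conf layout; the queue name and all values are placeholders, not the production configuration:

    # /etc/arc.conf — sketch of an ARC6 CE submitting to SLURM.
    [common]

    # hand jobs to the local SLURM batch system
    [lrms]
    lrms=slurm

    # A-REX job-management service
    [arex]

    # 'grid' is a placeholder queue name
    [queue:grid]
    comment=WLCG production queue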

 Software

OS: Scientific Linux release 7.9

GCC: gcc (GCC) 4.4.7; C++: g++ (GCC) 4.4.7

FC: GNU Fortran (GCC) 4.4.7; *FLAGS: -O2 -pthread -fPIC -m32

BATCH: SLURM with adaptation to Kerberos and AFS

dCache-5.2; Enstore 6.3 for tape robots

SPECall_cpp2006 with 32-bit binaries, SPEC2006 version 1.1

WLCG

FTS

FairSoft

FairRoot

MPDroot

 

Structure of the CICC Cluster, 2020

Structure of the CICC Cluster, 2019