Architecture and facilities of the JINR Tier2 site, 2018

Status as of 2018:

Computing Resources (CE)

  • Interactive cluster: lxpub[01-05].jinr.ru
  • User interface: lxui[01-04].jinr.ru (gateway for external connections)
  • Computing farms:

– general-purpose farm,
– farm for the LHC experiments,
– cluster for NICA, MPD, BM@N (lxmpd-ui.jinr.ru)

Total:
248 compute nodes (WNs),
4128 cores (slots),
55489 HS06.
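
For scale, these totals imply the following rough per-unit figures (a back-of-the-envelope sketch using only the numbers quoted above):

```python
# Rough per-unit figures derived from the quoted totals.
nodes = 248          # compute nodes (WNs)
cores = 4128         # cores / slots
hs06 = 55489         # total HEP-SPEC06

print(f"cores per node: {cores / nodes:.1f}")   # ~16.6
print(f"HS06 per core:  {hs06 / cores:.2f}")    # ~13.44
print(f"HS06 per node:  {hs06 / nodes:.1f}")    # ~223.7
```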

Storage Systems (SE)

dCache:
disk servers: 9.7 PB
tape robots: IBM TS3500 (1.6 PB) + IBM TS4500 (1.6 PB)

EOS: 7.198 PB

CVMFS: 2 machines, 2x70 TB h/w RAID1 (VOs: NICA (MPD, BM@N, SPD)).

Software

OS: CentOS / Scientific Linux release 7.9

GCC: gcc (GCC) 4.4.7

C++: g++ (GCC) 4.4.7

FC: GNU Fortran (GCC) 4.4.7

FLAGS: -O2 -pthread -fPIC -m32

BATCH: SLURM, adapted for Kerberos and AFS (see the submission sketch after this list)

dCache-5.2

Enstore 6.3 for tape robots

Benchmark: SPECall_cpp2006 (SPEC2006 version 1.1), 32-bit binaries

 WLCG

FairSoft 

FairRoot 

MPDroot 
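
As referenced from the SLURM item above, here is a minimal sketch of a Kerberos/AFS-aware submission wrapper. The wrapper and the job-script name are hypothetical, not the site's actual tooling; kinit -R, aklog, and sbatch are the standard Kerberos, AFS, and SLURM commands, and a renewable Kerberos ticket is assumed to already exist:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper: refresh Kerberos/AFS credentials, then submit
a job to SLURM. Illustrates the Kerberos/AFS adaptation named above."""
import subprocess
import sys

def submit(job_script: str) -> None:
    # Renew the Kerberos ticket (assumes a renewable TGT exists).
    subprocess.run(["kinit", "-R"], check=True)
    # Derive an AFS token from the Kerberos ticket.
    subprocess.run(["aklog"], check=True)
    # Hand the batch script to SLURM.
    subprocess.run(["sbatch", job_script], check=True)

if __name__ == "__main__":
    submit(sys.argv[1])  # e.g. ./submit.py job.sh
```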

 

The CICC data-processing farm is composed of 240 64-bit nodes (2 x Xeon CPUs, 4-14 cores per CPU: E54XX, X65XX, X56XX, E5-26XX v3/v4; 2-4 GB RAM per core) built on SuperMicro Blade, SuperMicro Twin2, and Dell FX models.

Since the 4-14 cores of each processor act as independent job slots, the farm provides 4128 cores in total.
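
A quick bounds check (a sketch using only the figures quoted above) confirms that the node and core counts are mutually consistent:

```python
# 240 dual-CPU nodes with 4-14 cores per CPU must bracket 4128 cores.
nodes, cpus_per_node = 240, 2
lo = nodes * cpus_per_node * 4    # 1920 cores minimum
hi = nodes * cpus_per_node * 14   # 6720 cores maximum
total = 4128

assert lo <= total <= hi
print(f"avg cores per CPU: {total / (nodes * cpus_per_node):.1f}")  # ~8.6
```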

Five 64-bit PCs with interactive user access are provided for users' own software development and other tasks.

The CICC comprises several servers for JINR users and services: batch, WWW, MySQL and Oracle databases, e-mail, DNS, Nagios monitoring, and others.
These servers run mainly on 64-bit Xeon and Opteron processors.

Software:

OS: Scientific Linux release 6 x86_64.
BATCH: Torque 4.2.10 (locally modified)
Maui 3.3.2 (locally modified)
CMS PhEDEx
ALICE VObox
dCache-3.2
EOS aquamarine

dCache is the main software and hardware system used for big data storage at the JINR CICC. The following dCache instances are supported:

1st disk instance, for the two LHC virtual organizations CMS and ATLAS (typically SuperMicro and Dell hardware):

24 disk servers: 2 x CPU, 24 GB RAM, 24 x SATA h/w RAID6, 43-63 TB each
4 disk servers: 2 x CPU (Xeon E5-2660 v3 @ 2.60GHz); 128 GB RAM; 76 TB ZFS (16x6000 GB NL SAS); 2x10G (see the capacity sketch below)

Total space: 2070 TB

2 head-node machines: 2 x CPU (Xeon E5-2683 v3 @ 2.00GHz); 128 GB RAM; 4x1000 GB SAS h/w RAID10; 2x10G.
KVM (Kernel-based Virtual Machine) is used to host the access-protocol services.
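
The 76 TB quoted for the ZFS servers above is consistent with a double-parity layout over the 16 x 6000 GB drives; the sketch below assumes RAID-Z2 and binary-unit (TiB) reporting, neither of which is stated here:

```python
# Plausibility check for the "76 TB ZFS (16x6000 GB)" figure.
# Assumptions: RAID-Z2 (2 parity drives), capacity reported in TiB.
drives, drive_gb, parity = 16, 6000, 2

usable_bytes = (drives - parity) * drive_gb * 10**9  # 84 TB decimal
print(f"{usable_bytes / 10**12:.0f} TB decimal = "
      f"{usable_bytes / 2**40:.1f} TiB")             # ~76.4, matching ~76 TB
```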

2nd disk instance, for ALICE and EOS:

12 disk servers: 2 x CPU, 24 GB RAM, 24 x SATA h/w RAID6, 10-63 TB each
2 disk servers: 2 x CPU (Xeon E5-2660 v3 @ 2.60GHz); 128 GB RAM; 76 TB ZFS (16x6000 GB)

Total space: 712 TB.

3rd disk instance, for EGI VOs and local users:

12 disk servers: Core2 Duo E8400 CPU, 4 GB RAM, 12 x SATA h/w RAID6, 8 TB each
1 disk server: 2 x CPU, 24 GB RAM, 24 x SATA h/w RAID6, 63 TB

Total space: 147 TB

Storage Software:

dCache-2.16
Enstore 4.2.2 for tape robot
EOS aquamarine
XROOTD 3
CMS PhEDEx

The total capacity of the dCache and XROOTD storage systems is ~1.4 PB (?)
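
Data in the dCache and XROOTD systems are reachable over the xrootd protocol; below is a minimal access sketch using the XRootD Python bindings. The door hostname and path are hypothetical placeholders, not actual JINR endpoints:

```python
# Minimal xrootd directory listing; endpoint and path are placeholders.
from XRootD import client
from XRootD.client.flags import DirListFlags

fs = client.FileSystem("root://xrootd-door.example.jinr.ru:1094")
status, listing = fs.dirlist("/some/vo/data", DirListFlags.STAT)

if status.ok:
    for entry in listing:
        print(entry.name, entry.statinfo.size)
else:
    print("listing failed:", status.message)
```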

AFS (Andrew File System)

6 AFS servers are installed at the JINR CICC.

The total AFS space at JINR is ~6 TB.

Network Infrastructure

To improve the operation of the local CICC network and to achieve the required data- and file-access performance, several 1 Gbps connections are aggregated (trunked) into a single virtual channel of increased bandwidth (10 Gbps), including the links towards LHCONE and GEANT.
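
Trunked links typically distribute traffic by hashing each flow onto one member link, preserving per-flow packet ordering while aggregate bandwidth scales with the number of links. A conceptual sketch follows; the hash scheme and addresses are illustrative, not the switches' actual algorithm:

```python
# Conceptual flow-to-link mapping in an aggregated (trunked) channel:
# each flow is pinned to one member link by a hash of its 4-tuple.
import hashlib

LINKS = 4  # e.g. four aggregated links

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % LINKS

print(pick_link("10.0.0.10", "10.0.1.5", 40000, 1094))  # stable link index
```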