CICC architecture and facilities 2018

2018: the CICC data-processing farm consists of 240 64-bit nodes (2 x Xeon CPUs, 4-14 cores per CPU, E54XX, X65XX, X56XX and E5-26XX v3/v4 families, 2-4GB RAM per core) of the SuperMicro Blade, SuperMicro Twin2 and Dell FX models.

Since the 4-14 cores on each processor chip are effectively independent, the farm provides in total: 3640 cores, 46866.52 HEP-SPEC06.
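As a quick sanity check of these totals, the short Python sketch below derives the average core count per CPU and the HEP-SPEC06 rating per core. Only the node count and the two quoted totals come from the text above; the rest is plain arithmetic.

# Back-of-the-envelope check of the CICC farm totals quoted above.
NODES = 240            # worker nodes in the farm
CPUS_PER_NODE = 2      # dual-socket Xeon nodes
TOTAL_CORES = 3640     # quoted total core count
TOTAL_HS06 = 46866.52  # quoted total HEP-SPEC06

avg_cores_per_cpu = TOTAL_CORES / (NODES * CPUS_PER_NODE)
hs06_per_core = TOTAL_HS06 / TOTAL_CORES

print(f"average cores per CPU: {avg_cores_per_cpu:.2f}")  # ~7.58
print(f"HEP-SPEC06 per core:   {hs06_per_core:.2f}")      # ~12.88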

Five 64-bit PCs with interactive user access are provided for users' own software development and for other tasks.

CICC comprises several servers for JINR users and services: batch, WWW, MySQL and Oracle databases, e-mail, DNS, Nagios monitoring and others.
These servers run mainly on 64-bit Xeon and Opteron processors.
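As an illustration of how such services can be watched, the sketch below follows the standard Nagios plugin convention: one status line on stdout and exit code 0/1/2 for OK/WARNING/CRITICAL. The monitored path and the thresholds are hypothetical placeholders, not the actual CICC settings.

#!/usr/bin/env python3
# Minimal Nagios-style disk-usage check (illustrative sketch).
import shutil
import sys

PATH = "/var/lib/mysql"   # hypothetical mount point to watch
WARN, CRIT = 80.0, 90.0   # percent-used thresholds (assumed)

usage = shutil.disk_usage(PATH)
pct_used = 100.0 * usage.used / usage.total

# Nagios plugin contract: print one status line, exit 0/1/2.
if pct_used >= CRIT:
    print(f"CRITICAL - {PATH} is {pct_used:.1f}% full")
    sys.exit(2)
if pct_used >= WARN:
    print(f"WARNING - {PATH} is {pct_used:.1f}% full")
    sys.exit(1)
print(f"OK - {PATH} is {pct_used:.1f}% full")
sys.exit(0)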


dCache

dCache is the main software and hardware system used for big-data storage at the JINR CICC. The following storage instances are maintained:

1st instance, for the LHC virtual organizations CMS and ATLAS (typically Supermicro and Dell hardware):

24 disk servers: 2 x CPU, 24GB RAM, 24 x SATA, h/w RAID6, 43-63TB
4 disk servers: 2 x CPU (Xeon E5-2660 v3 @ 2.60GHz), 128GB RAM, 76TB ZFS (16 x 6000GB NL SAS), 2 x 10G

Total space: 2070TB

2 head-node machines: 2 x CPU (Xeon E5-2683 v3 @ 2.00GHz), 128GB RAM, 4 x 1000GB SAS h/w RAID10, 2 x 10G.
KVM (Kernel-based Virtual Machine) virtual machines provide the access-protocol services.

2nd instance, for ALICE (EOS):

12 disk servers: 2 x CPU, 24GB RAM, 24 x SATA, h/w RAID6, 10-63TB
2 disk servers: 2 x CPU (Xeon E5-2660 v3 @ 2.60GHz), 128GB RAM, 76TB ZFS (16 x 6000GB)

Total space: 712TB

3rd instance, for EGI VOs and local users:

12 disk servers: Core2 Duo E8400 CPU, 4GB RAM, 12 x SATA, h/w RAID6, 8TB
1 disk server: 2 x CPU, 24GB RAM, 24 x SATA, h/w RAID6, 63TB

Total space: 147TB
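For completeness, here is a minimal sketch of how a client might list a dCache directory over its WebDAV door, one of the access protocols dCache exposes. The host, port and path are hypothetical placeholders, and the X.509 grid authentication used in practice is omitted.

# Sketch: WebDAV directory listing against a dCache door.
# Host, port and path are hypothetical; real access would present
# an X.509 certificate or proxy for authentication.
import requests

URL = "https://dcache-door.example.jinr.ru:2880/pnfs/jinr.ru/data/"

# PROPFIND with "Depth: 1" asks a WebDAV server for the directory's
# immediate children, returned as an XML multistatus document.
resp = requests.request(
    "PROPFIND",
    URL,
    headers={"Depth": "1"},
    timeout=30,
)
print(resp.status_code)   # 207 Multi-Status on success
print(resp.text[:500])    # raw XML listing; parse as needed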


AFS (Andrew File System)

6 AFS servers are installed at the JINR CICC.

The total AFS space at JINR is ~6TB.
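On a machine with the OpenAFS client installed, volume usage can be inspected with the standard fs tools; the sketch below simply wraps the real fs listquota command, with a hypothetical cell path.

# Sketch: query an AFS volume quota via the OpenAFS `fs` command.
import subprocess

AFS_PATH = "/afs/jinr.ru/user/someuser"  # hypothetical AFS path

# `fs listquota` prints the volume name, quota and current usage
# for the volume containing the given path.
result = subprocess.run(
    ["fs", "listquota", AFS_PATH],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)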

Software:

OS: Scientific Linux release 6, x86_64
BATCH: Torque 4.2.10 (home-made build)
Maui 3.3.2 (home-made build)
CMS PhEDEx
ALICE VObox
UMD-4
dCache 3.2
EOS Aquamarine
CVMFS
OpenAFS

WLCG
XRootD 3
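As a hedged illustration of how the Torque/Maui batch system is used, the sketch below submits a trivial job through qsub; the queue name, resource request and walltime are assumptions, since the actual CICC queue configuration is not described here.

# Sketch: submit a trivial job to Torque via qsub.
# Queue name and resource limits are illustrative assumptions.
import subprocess

JOB_SCRIPT = """\
#!/bin/bash
#PBS -N demo_job
#PBS -q workq
#PBS -l nodes=1:ppn=1
#PBS -l walltime=01:00:00
echo "running on $(hostname)"
"""

# With no script argument, qsub reads the job script from stdin
# and prints the id of the newly queued job.
result = subprocess.run(
    ["qsub"],
    input=JOB_SCRIPT,
    capture_output=True,
    text=True,
    check=True,
)
print("submitted job:", result.stdout.strip())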


The network infrastructure

To improve the operation of the CICC local network and to achieve the required data- and file-access performance, several 1Gbps/10Gbps links are aggregated (trunked) into a single virtual channel of increased bandwidth, in particular for the LHCONE and GEANT connections.
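When such link aggregation is done with the Linux bonding driver on a host, its state can be read from /proc; the sketch below assumes a hypothetical bond0 interface, and the actual CICC trunks may well be configured on the switches instead.

# Sketch: inspect a Linux bonded (trunked) interface via /proc.
# The interface name is a hypothetical example.
from pathlib import Path

BOND = "bond0"

status = Path(f"/proc/net/bonding/{BOND}").read_text()

# Report the aggregation mode and the state of each member link.
for line in status.splitlines():
    if line.startswith(("Bonding Mode", "Slave Interface", "MII Status", "Speed")):
        print(line)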