Storage Resources

CICC users are provided with the following data storage systems and software:

2023:

In 2023, 8 old disk servers were removed and 4 new Dell R740 servers of 300 TB each were added; the SE now has 4 disk servers and EOS has 28. Added to the SE: 2 servers (Qtech QSRV-462402_3), 681.1 TB in total. Added to EOS: 20 servers (Qtech QSRV-462402_4).

As a result, we currently have:

  • EOS: 2023Q4 = 23328.10 TB; 2023Q3 = 22203.82 TB; 2023Q2 = 21829.01 TB; 2023Q1 = 16582.35 TB

ALICE @ EOS: 1653.24 TB

EOSCTA: 11.5 PB (2023Q3)

  • dCache:

2023Q2: 3753.69 TB

CMS = 1903.27 TB (used 422.71 TB / free 1480.23 TB)

ATLAS = 1850.42 TB (used 1087.25 TB / free 715.48 TB)

2023Q4: 3933.47 TB (CMS 1994.22 TB; ATLAS 1939.25 TB)

Local & EGI @ dcache2: 199.74 TB

Tape robot: 3003 TB

  • CVMFS: capacity is 140 TB; 1 stratum-0 server, 2 stratum-1 servers, and 4 squid servers (2 of the squid servers cache CVMFS).

VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd…
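The stratum-0 / stratum-1 / squid layout above is the standard CVMFS distribution topology: clients fetch through a local squid cache from a stratum-1 replica. A minimal client-side configuration sketch; the repository and proxy hostnames below are placeholders, not the actual CICC values:

```shell
# /etc/cvmfs/default.local -- client-side sketch (hypothetical names)
# Repository to mount; the real VO repository names will differ.
CVMFS_REPOSITORIES=example.jinr.ru
# Local squid cache the client must go through (placeholder host/port).
CVMFS_HTTP_PROXY="http://cvmfs-squid.example.jinr.ru:3128"
```

After editing this file, `cvmfs_config setup` applies the configuration and `cvmfs_config probe` checks that the repositories mount.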

  • AFS: ~12.5 TB (user home directories, workspaces)

===================================

Archive:

CICC 2023

2022 CICC configuration

2021 CICC configuration

2020 CICC configuration

2019 CICC

2018 CICC architecture and facilities


In 2020, the following work was performed to change the configuration of the Tier-2 site (JINR-LCG2):

1) The batch server was changed to Slurm; it works and is used by the CE.

2) The Computing Element (CE) for WLCG was changed to ARC6 (with an internal Slurm queue), since CREAM-CE is no longer supported by the central task launcher.

3) The farm was migrated to Scientific Linux release 7.

4) A new tape robot (TS4500) was added.

5) PhEDEx was replaced by Rucio.

6) New servers were connected to EOS.

7) The user interface hosts lxui[01-04].jinr.ru serve as a gateway for external connections.

8) The disk space of the interactive cluster /scrc was increased (2.2 TB).
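With Slurm as the batch system behind the ARC6 CE, a local user on the interactive cluster would submit work with an ordinary batch script. A minimal sketch, assuming a hypothetical partition name (the site's real partition and limits may differ):

```shell
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --ntasks=1
#SBATCH --time=00:10:00
# The partition name is a placeholder, not necessarily the CICC one.
#SBATCH --partition=main

# SLURM_JOB_ID is set by Slurm at run time; fall back to "local"
# so the script also runs outside the batch system.
echo "Job ${SLURM_JOB_ID:-local} running on $(hostname)"
```

Submitted with `sbatch script.sh`; grid jobs arriving through the ARC6 CE land in the same internal Slurm queue.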