2024:
- EOS: 2023Q4 = 23328.10 TB; 2023Q3 = 22203.82 TB; 2023Q2 = 21829.01 TB; 2023Q1 = 16582.35 TB
- ALICE @ EOS total: 1653.24 TB
- EOSCTA: 2023Q3 = 11 PB
- AFS: ~12.5 TB (user home directories, workspaces)
- CVMFS: 140 TB; 3 machines: 1 stratum-0, 2 stratum-1; 2 squid servers cache CVMFS (VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd, ...)
- dCache 2023Q4: ATLAS: 1939.25 TB; CMS: 1994.36 TB; Local & EGI @ dcache2 total: 199.74 TB
2023:
Removed 8 old disk servers;
added 4 new DELL R740 servers, 300 TB each.
The disk SE now has 4 servers; EOS has 28.
Added: 2 servers (Qtech QSRV-462402_3) = 681.1 TB.
Added to EOS: 20 servers (Qtech QSRV-462402_4).
As a result, we currently have:
- EOS: 2023Q4 = 23328.10 TB; 2023Q3 = 22203.82 TB; 2023Q2 = 21829.01 TB; 2023Q1 = 16582.35 TB
- EOSCTA: 11.5 PB
- ALICE EOS: 1653.24 TB
- CVMFS: capacity 140 TB; 1 stratum-0 server, 2 stratum-1 servers, 4 squid servers (2 squid servers cache CVMFS). VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd, ...
- AFS: ~12.5 TB (user home directories, workspaces)
- dCache (CMS and ATLAS only):
  - 2023Q2: CMS: 1903.269531 TB total; used 422.708210 / free 1480.230266. ATLAS: 1850.424810 TB total; used 1087.250671 / free 715.476494
  - Local & EGI @ dcache2: 199.74 TB
  - 2023Q4: CMS: 1994.22 TB; ATLAS: 1939.25 TB; Local & EGI @ dcache2: 199.74 TB
- tape robot: 3003 TB
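The 2023Q2 dCache figures can be cross-checked arithmetically: for each VO, used plus free space should add up to the reported total. A minimal sketch (plain Python, figures copied from the report above); any residual difference would indicate reserved or otherwise unaccounted space:

```python
# Consistency check of the dCache 2023Q2 accounting reported above:
# for each VO, compare used + free against the reported total capacity.
capacities = {
    # vo: (total_tb, used_tb, free_tb)
    "CMS": (1903.269531, 422.708210, 1480.230266),
    "ATLAS": (1850.424810, 1087.250671, 715.476494),
}

for vo, (total, used, free) in capacities.items():
    diff = total - (used + free)
    print(f"{vo}: used + free = {used + free:.2f} TB, "
          f"total = {total:.2f} TB, diff = {diff:.2f} TB")
```

For CMS the sum matches the total to within about 0.33 TB, while for ATLAS roughly 48 TB of the total is not covered by used + free, so the check is a useful sanity test on the published numbers.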
2020:
In 2020, the following work was performed to change the Tier-2 (JINR-LCG2) configuration:
1) The batch server was migrated to Slurm; it is operational and used by the CE.
2) The Computing Element (CE) was changed to the WLCG type ARC6 (internal Slurm queue), since CREAM-CE is no longer supported by the central task launcher.
3) The farm was migrated to Scientific Linux release 7.
4) A new tape robot (TS4500) was added.
5) PhEDEx was replaced by Rucio.
6) New servers were connected to EOS.
7) The user interface lxui[01-04].jinr.ru serves as a gateway for external connections.
8) The disk space of the interactive cluster /scrc was increased (2.2 TB).
dCache:
- disk servers: 9.7 PB
- tape robots: IBM TS3500 (1.6 PB) + IBM TS4500 (1.6 PB)
EOS: 7.198 PB
CVMFS: 2 machines: 2×70 TB h/w RAID1 (VOs: NICA (MPD, BM@N, SPD)).
AFS: ~12.5 TB
NFS: ~11 TB; 5 NFS servers
xrootd: 40 GB
Clusters:
interactive cluster (lxpub[01-05].jinr.ru)
several computing farms:
- common farm
- LHC experiments dedicated farm
- parallel processing farm
- NICA, MPD, BM@N cluster
Total:
248 computing nodes (WNs),
4128 cores/slots,
55.489 kHS06
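From the totals above one can derive per-node and per-core averages (a quick sketch; HS06 refers to the HEP-SPEC06 benchmark unit, and the figures are taken directly from the 2020 totals):

```python
# Derived averages from the 2020 farm totals reported above.
nodes = 248          # worker nodes (WNs)
cores = 4128         # cores/slots
khs06 = 55.489       # total compute power in kHS06 (HEP-SPEC06)

cores_per_node = cores / nodes
hs06_per_core = khs06 * 1000 / cores

print(f"average cores per node: {cores_per_node:.1f}")   # ~16.6
print(f"average HS06 per core:  {hs06_per_core:.2f}")    # ~13.44
```

This gives roughly 16.6 cores per node and about 13.4 HS06 per core, which is a plausible per-core score for the hardware generations in use at the time.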