Archive SE 2020, 2023, 2024

2024:

EOS: 2023Q3=23350 TB;
2023Q2=21828.99 TB;
2023Q1=16582.35 TB;
2021/22=15.598 TB.

ALICE @ EOS: 1653.22 TB

EOSCTA: 2023Q3=11000 TB

AFS: ~12.5 TB (user home directories, workspaces)

CVMFS: 140 TB (3 machines: 1 stratum-0, 2 stratum-1; 2 squid servers cache CVMFS). VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd, …

dCache 2023Q2-Q4: 3933.43 TB (ATLAS @ dCache: 1939.25 TB; CMS @ dCache: 1994.18 TB)

Local & EGI @ dcache2 199.74 TB

————-

Total: EOS (23350 TB + 1653.22 TB) + EOSCTA (10000 TB) + AFS (12.5 TB) + CVMFS (140 TB) + dCache (1939.25 TB + 1994.18 TB + 199.74 TB) = 39288.89 TB
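As a quick arithmetic check, the total can be recomputed from the component figures quoted above (a sketch in Python; EOSCTA is taken as 10000 TB, as in the total line):

```python
# Recompute the 2024 total from the per-system figures (all in TB).
components_tb = {
    "EOS": 23350 + 1653.22,                 # EOS + ALICE @ EOS
    "EOSCTA": 10000,
    "AFS": 12.5,
    "CVMFS": 140,
    "dCache": 1939.25 + 1994.18 + 199.74,   # ATLAS + CMS + Local & EGI
}
total_tb = sum(components_tb.values())
print(f"Total: {total_tb:.2f} TB")  # Total: 39288.89 TB
```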

FYI:

dCache: Added: 2 servers (Qtech QSRV-462402_3) = ~0.68 PB

EOS: Added: 20 servers (Qtech QSRV-462402_4). Sum(capacity): 22.353 PB

2023:

Eight old disk servers were removed and four new Dell R740 servers of 300 TB each were added. The SE disk pool now has 4 servers; EOS has 28. dCache added: 2 servers (Qtech QSRV-462402_3) = 681.1 TB; EOS added: 20 servers (Qtech QSRV-462402_4).

As a result, we currently have:

EOS: 2023Q4=23328.10 TB; 2023Q3=22203.82 TB; 2023Q2=21829.01 TB; 2023Q1=16582.35 TB.

ALICE EOS: 1653.24 TB;

EOSCTA: 2023Q3=11.5 PB

dCache:

2023Q2: 3753.694341 TB

CMS=1903.269531 TB (busy 422.708210 TB / free 1480.230266 TB);
ATLAS=1850.424810 TB (busy 1087.250671 TB / free 715.476494 TB)

2023Q4: 3933.47 TB

CMS: 1994.22 TB;

ATLAS: 1939.25 TB

Local & EGI @ dcache2 199.74 TB
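The quarterly dCache totals above can be cross-checked against the per-experiment figures (a sketch; all numbers are copied from the notes):

```python
# Per-VO dCache capacities (TB) should sum to the quoted quarterly totals.
q2 = {"CMS": 1903.269531, "ATLAS": 1850.424810}   # 2023Q2: 3753.694341 TB
q4 = {"CMS": 1994.22, "ATLAS": 1939.25}           # 2023Q4: 3933.47 TB

assert abs(sum(q2.values()) - 3753.694341) < 1e-6
assert abs(sum(q4.values()) - 3933.47) < 1e-6
print("dCache quarterly totals are consistent")
```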

tape robot: 3003 TB

CVMFS: capacity is 140 TB (2 squid servers cache CVMFS); 1 stratum-0 server, 2 stratum-1 servers, 4 squid servers. VOs: NICA (MPD, BM@N, SPD), dstau, er, jjnano, juno, baikalgvd, …

AFS: ~12.5 TB (user home directories, workspaces)

 

2020:

In 2020, the following work was performed to change the configuration of the Tier-2 site (JINR-LCG2):

1) The batch server was changed to Slurm; it works and is used by the CE.

2) The Computing Element (CE) for WLCG was changed to ARC6 (with an internal Slurm queue), since CREAM-CE is no longer supported by the central task launcher.

3) The farm was migrated to Scientific Linux release 7.

4) A new tape robot (IBM TS4500) was added.

5) PhEDEx was replaced by Rucio.

6) New servers were connected to EOS.

7) The user interface lxui[01-04].jinr.ru serves as a gateway for external connections.

8) The disk space of the interactive cluster /scrc was increased (2.2 TB).

 

dCache:

- disk servers: 9.7 PB

- tape robots: IBM TS3500 (1.6 PB) + IBM TS4500 (1.6 PB)

EOS: 7.198 PB

CVMFS: 2 machines: 2×70 TB h/w RAID1 (VOs: NICA (MPD, BM@N, SPD)).

AFS: ~12.5 TB

NFS: ~11 TB; 5 NFS servers

xrootd: 40 GB

Clusters:

interactive cluster (lxpub[01-05].jinr.ru)

several computing farms:

— common farm

— LHC experiments dedicated farm

— Parallel processing farm

— NICA, MPD, BM@N  cluster

Total:

248 computing nodes (WNs),

4128 cores/slots,

55.489 kHS06

 

2023 CICC configuration

2022 CICC configuration

2021 CICC configuration

2020 CICC configuration

2019 CICC

2018 CICC architecture and facilities