
Storage Elements (SE)

2019:

Storage System: dCache

The storage system consists of disk arrays and long-term data storage on tapes, supported by the dCache 3.2 and Enstore 4.2.2 software (hardware: typically Supermicro and DELL). A file-locality query sketch follows the hardware list below.

  • 1st - Disk Only: 8.3 PB (disk 7.2 PB, buffer 1.2 PB)

  • 1 tape robot: IBM TS3500, 3440xLTO-6 data cartridges; 12xLTO-6 tape drives FC8; 11 PB.

~400 slots are reserved for MPD and BM@N.
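For illustration, the sketch below shows one way to check whether a file in this dCache/Enstore setup currently resides on the disk pools or only on tape, using the REST interface that dCache 3.x can expose through its frontend service. It is a minimal sketch: the host name, port, file path, and the assumption that namespace metadata can be read without extra authentication are all hypothetical.

# Minimal sketch (assumptions: dCache frontend REST API enabled on port 3880,
# hypothetical host and path, no extra authentication required for reading
# namespace metadata -- real deployments usually need X.509 or macaroons).
import requests

FRONTEND = "https://dcache.example.jinr.ru:3880"      # hypothetical frontend host
FILE_PATH = "/pnfs/jinr.ru/data/mpd/run_000123.root"  # hypothetical file

def file_locality(path: str) -> str:
    """Return the reported file locality, e.g. ONLINE (disk), NEARLINE (tape),
    or ONLINE_AND_NEARLINE (both)."""
    url = f"{FRONTEND}/api/v1/namespace{path}"
    resp = requests.get(url, params={"locality": "true"},
                        verify="/etc/grid-security/certificates")  # site CA directory
    resp.raise_for_status()
    return resp.json().get("fileLocality", "UNKNOWN")

if __name__ == "__main__":
    print(FILE_PATH, "->", file_locality(FILE_PATH))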

Storage System: EOS (common to all projects)

This system is designed for storing and accessing large amounts of information, including distributed collective data production, storage of “raw” data from the facilities, and data conversion and analysis. Global access to EOS is provided by means of the WLCG software. Currently, EOS at the JINR CICC is used by the NICA, BM@N, and MPD experiments. Total space: 4 PB.
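As an illustration of the WLCG-style global access mentioned above, here is a minimal sketch of listing an EOS directory over XRootD with the official Python bindings. The endpoint and directory are invented examples, and depending on the instance configuration a valid grid proxy or Kerberos ticket may be required.

# Minimal sketch (assumptions: hypothetical EOS endpoint and path; requires
# the XRootD Python bindings and, typically, a grid proxy or Kerberos ticket).
from XRootD import client

ENDPOINT = "root://eos.jinr.ru"     # hypothetical EOS MGM endpoint
DIRECTORY = "/eos/nica/mpd"         # hypothetical experiment directory

fs = client.FileSystem(ENDPOINT)
status, listing = fs.dirlist(DIRECTORY)
if not status.ok:
    raise RuntimeError(f"dirlist failed: {status.message}")
for entry in listing:
    print(entry.name)               # file and subdirectory names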

* Storage Software:

  • CMS PhEDEx
  • dCache-3.2
  • Enstore 4.2.2 for tape robot.
  • BATCH: Torque 4.2.10 (home made) / Maui 3.3.2 (home made) (a local-submission sketch follows this list)
  • EOS aquamarine
  • WLCG (2xCREAM, 4xARGUS, BDII top, BDII site, APEL parsers, APEL publisher, EMI-UI, 220xEMI-WN + gLExec-wn, 4xFTS3, LFC, WMS, L&B, glite-proxyrenewal)
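Production jobs normally arrive through the CREAM computing elements listed above; purely as a minimal local-submission sketch against the Torque/Maui batch system, the snippet below wraps qsub. The script path, job name, and walltime are placeholders, not site policy.

# Minimal sketch (assumptions: qsub available on a submit host; the job
# script path, name, and walltime below are placeholders).
import subprocess

def submit(script: str, name: str = "test-job", walltime: str = "01:00:00") -> str:
    """Submit a job script with qsub and return the job id Torque prints."""
    result = subprocess.run(
        ["qsub", "-N", name, "-l", f"walltime={walltime}", script],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit("/home/user/hello.sh"))   # placeholder job script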



2018:

* Hardware

Storage System dCache:

Typically Supermicro and DELL

  • 1st - Disk Only: 7.2 PB

  • 2nd - support Mass Storage System: 1.1 PB.

* 1 tape robot:

  • IBM TS3500, 3440xLTO-6 data cartridges;

  • 12xLTO-6 tape drives FC8, 9 PB

* Software

  • OS: Scientific Linux release 6 x86_64.
  • BATCH: Torque 4.2.10 (home made)
  • Maui 3.3.2 (home made)
  • CMS PhEDEx
  • dCache-3.2
  • Enstore 4.2.2 for tape robot.



2017:

* Hardware

Storage System dCache:

1st - Disk for ATLAS & CMS: Typically Supermicro and DELL

  • 31 disk servers: 2 x CPU (Xeon E5-2650 @ 2.00GHz); 128GB RAM; 63TB ZFS (24x3000GB NL SAS); 2x10G.
  • 24 disk servers: 2 x CPU (Xeon E5-2660 v3 @ 2.60GHz); 128GB RAM; 76TB ZFS (16x6000GB NL SAS); 2x10G
  • 4 disk servers: 2 x CPU (Xeon E5-2650 v4 @ 2.29GHz); 128GB RAM; 150TB ZFS (24x8000GB NLSAS); 2x10G
Total space: 4.6 PB.

  • 3 head node machines: 2 x CPU (Xeon E5-2683 v3 @ 2.00GHz); 128GB RAM; 4x1000GB SAS h/w RAID10; 2x10G.
  • 8 KVM (Kernel-based Virtual Machine) for access protocols support.
2nd - support Mass Storage System:

  • 8 disk servers: 2 x CPU (Xeon X5650 @2.67GHz); 96GB RAM; 63TB h/w RAID6 (24x3000GB SATAIII); 2x10G; Qlogic Dual 8Gb FC.
  • 8 disk servers: 2 x CPU (E5-2640 v4 @ 2.40GHz); 128GB RAM; 70TB ZFS (16x6000GB NLSAS); 2x10G; Qlogic Dual 16Gb FC.
Total disk buffer space: 1PB.

  • 1 tape robot: IBM TS3500, 2000xLTO Ultrium-6 data cartridges; 12xLTO Ultrium-6 tape drives FC8; 11000TB.
  • 3 head node machines: 2 x CPU (Xeon E5-2683 v3 @ 2.00GHz); 128GB RAM; 4x1000GB SAS h/w RAID10; 2x10G.
  • 6 KVM machines for access protocols support
* Software

  • dCache-2.16
  • Enstore 4.2.2 for tape robot.

2016:

* Hardware

1st - Disk Only:

  • 30 disk servers: 2 x CPU (Xeon E5-2650 @ 2.00GHz); 128GB RAM; 63TB h/w RAID6 (24x3000GB NL SAS); 2x10G.
  • 12 disk servers: 2 x CPU (Xeon E5-2660 v3 @ 2.60GHz); 128GB RAM; 76TB ZFS (16x6000GB NL SAS); 2x10G.
Total space: 2.8 PB

  • 3 head node machines: 2 x CPU (Xeon E5-2683 v3 @ 2.00GHz); 128GB RAM; 4x1000GB SAS h/w RAID10; 2x10G.
  • 8 KVM (Kernel-based Virtual Machine) for access protocols support.

2nd - support Mass Storage System:

  • 8 disk servers: 2 x CPU (Xeon X5650 @2.67GHz); 96GB RAM; 63TB h/w RAID6 (24x3000GB SATAIII); 2x10G; Qlogic Dual 8Gb FC.
Total disk buffer space: 0.5 PB.

  • 1 tape robot: IBM TS3500, 2000xLTO Ultrium-6 data cartridges; 12xLTO Ultrium-6 tape drives FC8; 5400TB.
  • 3 head node machines: 2 x CPU (Xeon E5-2683 v3 @ 2.00GHz); 128GB RAM; 4x1000GB SAS h/w RAID10; 2x10G.
  • 6 KVM machines for access protocols support

* Software

  • dCache-2.10
  • Enstore 4.2.2 for tape robot.



2014:

* Hardware

2 SE facilities (dCache)

1st - Disk Only:

  • 3 disk servers: 65906GB h/w RAID6 (24x3000GB SATAIII); 2x1GbE; 48GB RAM
  • 5 disk servers: 65906GB h/w RAID6 (24x3000GB SATAIII); 10GbE; 48GB RAM
  • 3 control computers: 2xCPU (Xeon X5650 @ 2.67GHz); 48GB RAM; 500GB SATA-II; 2x1GbE.
  • 8 computers supporting access protocols; all run as virtual machines (KVM).


2nd - support Mass Storage System:

  • 2 disk servers: 65906GB h/w RAID6 (24x3000GB SATAIII); 2x1GbE; 48GB RAM
  • 1 tape robot: IBM TS3200, 24xLTO5; 4xUltrium5 FC8; 72TB.
  • 3 control computers: 2xCPU (Xeon X5650 @ 2.67GHz); 48GB RAM; 500GB SATA-II; 2x1GbE.
  • 6 computers supporting access protocols; all run as virtual machines (KVM).

* Software

  • dCache 2.6.31-1 (dcache.org) for the disk-only instance
  • dCache 2.2.27 (dcache.org) for the MSS instance

-- TWikiAdminUser - 2019-02-28
