CICC Services 2020

Computing Service.
  • The CICC provides the following computational services:

— Interactive cluster
— Computing farms

The computing farms serve all users registered on the JINR LIT farm.
Launching a job, selecting a specific farm or worker node, managing job execution and returning results to the user are handled by the PBS batch-processing software.
Direct interactive access to the computing farms is closed to users; however, the PBS interactive launch mode can be used to debug a job.
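
As an illustration, below is a minimal sketch of submitting a job to the PBS/Torque batch system from Python via the standard qsub command. The queue name, resource request and executable are assumptions for illustration only; actual queue names can be listed with qstat -q.

    #!/usr/bin/env python3
    # Minimal sketch: submit a job to the PBS/Torque batch system via qsub.
    # Queue name, resources and the executable are illustrative assumptions.
    import subprocess

    job_script = """#!/bin/bash
    #PBS -q common
    #PBS -l nodes=1:ppn=1
    #PBS -l walltime=01:00:00
    cd $PBS_O_WORKDIR
    ./my_analysis
    """

    # qsub reads the job script from stdin and prints the job identifier.
    result = subprocess.run(["qsub"], input=job_script,
                            capture_output=True, text=True, check=True)
    print("Submitted:", result.stdout.strip())

    # For debugging, an interactive session can be requested with `qsub -I`.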

 Interactive Cluster.

Five 64-bit PCs with interactive user access are provided for users' own software development and other tasks (interactive farm lxpubXX.jinr.ru, XX = 01, 02, 03, 04, 05).

  Computing Farms:

— Common farm: the PBS batch system consists of 248 64-bit computing nodes, 4128 cores/slots; 55488.92 HEP-SPEC06, 13872.23 HEP-kSI2k

— LHC experiment farm

— Parallel computing farm

— Cluster for NICA, MPD, BM@N (lxmpd-ui.jinr.ru)

 CVMFS (/cvmfs) 

CVMFS is used to deploy the large software packages of collaborations working in the WLCG. At present, versions of the NICA, BM@N and MPD software are already available (/cvmfs/nica, /cvmfs/…) and occupy 9.5 GB.
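
CVMFS repositories are mounted on demand by autofs, so simply accessing a path under /cvmfs triggers the mount. A minimal sketch, assuming the /cvmfs/nica repository path mentioned above:

    # Minimal sketch: accessing collaboration software deployed via CVMFS.
    # A repository under /cvmfs is mounted on demand at first access.
    import os

    repo = "/cvmfs/nica"  # repository path taken from the text above
    try:
        entries = os.listdir(repo)  # first access triggers the autofs mount
        print(repo, "contains:", ", ".join(sorted(entries)[:10]))
    except OSError as err:
        print("Repository not available:", err)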

 

Virtual organizations added to JINR Tier2:


ILC (WLCG http://www.linearcollider.org/), MPD (JINR NICA), BM@N (JINR NICA), COMPASS (WLCG CERN).

For the JUNO virtual organization, several services will be installed and configured on Tier2:

— CE JUNO: will allow running JUNO tasks on the JINR Tier2 farm;
— VOMS server: a mirror of the main VOMS server in China;
— CVMFS stratum-1 server: to support access to the JUNO software repositories in China.

OSG HT-CONDOR

The OSG HT-Condor computing element has been integrated into the Tier-2 centre infrastructure. This enables the STAR VO to process data on our Tier-2 with an over-90% first-pass efficiency.

Software:

OS: Scientific Linux release 6.8 x86_64
GCC: gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17)
C++: g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17)
FC: GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17)
BATCH: Torque 4.2.10 (home-made)
Maui 3.3.2 (home-made)
CMS Phedex
ALICE VObox
dCache-3.2
EOS aquamarine
UMD-4
cvmfs

 

Storage Service. 

  • dCache

The purpose of this project is to create a system for storing and retrieving huge amounts of data distributed among a large number of heterogeneous server nodes, organized in a virtual file system with a single tree of file names accessible by various standard methods. Depending on its configuration, dCache provides methods for data exchange with servers organized in one or two levels. It provides storage, space management, pool manipulation, replication, detection of load hotspots and file restoration in case of loss. The attached tape storage is presented to the user as unlimited direct-access space, and the data exchange with tapes is automatic and transparent. In addition to the HEP-specific protocols (SRM, GridFTP), data in dCache can be accessed through NFSv4.1 (pNFS) and via WebDAV (a short access sketch is given below).

dCache is the main software and hardware system used for big data storage at the JINR CICC. We support the following dCache instances:

1st disk instance: for the two LHC virtual organizations CMS and ATLAS (typically Supermicro and DELL hardware). Total space: 2.2 PB

2nd disk instance: for the EGI VOs and local users. Total space: 147 TB

2 head-node machines: 2 x CPU (Xeon E5-2683 v3 @ 2.00 GHz); 128 GB of RAM; 4x1000 GB SAS h/w RAID10; 2x10G.

KVM (Kernel-based Virtual Machine) is used to support the access protocols.
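
As an illustration of the WebDAV access mentioned above, the following sketch downloads a file from dCache over HTTPS using a grid proxy certificate. The host name, port, path and proxy location are hypothetical placeholders.

    # Minimal sketch: reading a file from dCache via WebDAV (HTTPS).
    # Host, port, path and proxy location are hypothetical placeholders.
    import requests

    url = "https://dcache.example.jinr.ru:2880/data/myfile.root"
    proxy = "/tmp/x509up_u1000"  # grid proxy file (certificate + key)

    resp = requests.get(url,
                        cert=(proxy, proxy),  # proxy serves as both cert and key
                        verify="/etc/grid-security/certificates")
    resp.raise_for_status()
    with open("myfile.root", "wb") as out:
        out.write(resp.content)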

  •  EOS Service.

The EOS storage service has a scalable hierarchical namespace and provides data access via the XROOT protocol. EOS provides storage for physics and user use cases. The main target area of the service is physics data analysis, which is characterized by a large number of simultaneously working users, a significant fraction of random data access and a high file-open rate. For user authentication EOS supports Kerberos (for local access) and X.509 certificates (for grid access). To ease experiment workflow integration, SRM and GridFTP access are also provided.

EOS at the JINR CICC is used by the NICA, BM@N and MPD experiments.
Total space: 4 PB
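
As a sketch of XROOT-protocol access to EOS, the following uses the XRootD Python bindings to list a directory. The endpoint and path are hypothetical placeholders; valid Kerberos or X.509 credentials are assumed to be in place.

    # Minimal sketch: listing an EOS directory over the XROOT protocol.
    # Endpoint and path are hypothetical placeholders.
    from XRootD import client
    from XRootD.client.flags import DirListFlags

    fs = client.FileSystem("root://eos.jinr.ru")
    status, listing = fs.dirlist("/eos/nica", DirListFlags.STAT)
    if status.ok:
        for entry in listing:
            print(entry.name, entry.statinfo.size)
    else:
        print("dirlist failed:", status.message)
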
  • XROOTD (40GB)

XRootD is a general-purpose suite for fast, low-latency and scalable data access, which can serve any type of data organized in a hierarchical, file-system-like namespace based on the concept of directories. At the JINR CICC, it is intended for the PANDA VO.
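
For instance, a file can be fetched from an XRootD server with the standard xrdcp client; the host and path below are hypothetical placeholders.

    # Minimal sketch: copying a file from an XRootD server with xrdcp.
    # Host and path are hypothetical placeholders.
    import subprocess

    src = "root://xrootd.jinr.ru//data/panda/sample.root"
    subprocess.run(["xrdcp", src, "./sample.root"], check=True)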

  • AFS (Andrew File System) .

The AFS service provides networked file storage for users, in particular users' home directories and work spaces:

/afs/jinr.ru/user/…

and project spaces:

/afs/jinr.ru/alice/, /afs/jinr.ru/atlas/, /afs/jinr.ru/bes3/, /afs/jinr.ru/cms/, …

AFS is based on OpenAFS, an open source distributed file system that provides a client-server architecture for location-independent, scalable and secure file sharing.

7 AFS servers (12.5 GB) are installed at the JINR CICC.
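
Access to AFS requires a valid token. Below is a minimal sketch using the standard Kerberos and OpenAFS command-line tools; the user directory is a hypothetical placeholder following the layout above.

    # Minimal sketch: obtaining an AFS token and inspecting directory ACLs.
    import subprocess

    subprocess.run(["kinit"], check=True)  # Kerberos ticket (prompts for a password)
    subprocess.run(["aklog"], check=True)  # converts the ticket into an AFS token

    # "someuser" is a hypothetical placeholder for a real user directory.
    subprocess.run(["fs", "listacl", "/afs/jinr.ru/user/someuser"], check=True)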

  • NFS (11TB)

Part of the data access is provided by the NFS software system. 5 NFS servers are installed on the CICC farm. Total space: ~11 TB.

 Storage Software:
  • dCache-3.2
  • Enstore 4.2.2 for tape robot
  • EOS aquamarine
  • XROOTD 3
  • CMS Phedex
  • ALICE VObox
  • openafs

 

Information service. 

A WWW server is installed at the JINR CICC to create users' home pages and to host pages of JINR internal collaborations, user groups and individual users, organized according to the rules of WWW virtual pages. Creating a user directory ~/public_html is described here. Access to the user's www-page: http://lit.jinr.ru/~username
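
A minimal sketch of preparing a ~/public_html directory follows; the permission bits are the common web-server userdir convention, not a confirmed CICC requirement (see the instructions referenced above).

    # Minimal sketch: creating ~/public_html for a personal web page.
    # Permission bits follow the common userdir convention (an assumption).
    import os

    home = os.path.expanduser("~")
    public_html = os.path.join(home, "public_html")

    os.makedirs(public_html, exist_ok=True)
    os.chmod(home, 0o711)         # web server must be able to traverse the home dir
    os.chmod(public_html, 0o755)  # directory contents must be world-readable

    with open(os.path.join(public_html, "index.html"), "w") as f:
        f.write("<html><body>My JINR home page</body></html>\n")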

The following monitoring systems are used to maintain the CICC infrastructure:

— litmon

— dCache monitoring

— Accounting EGI Tier2: JINR-LCG2

— RDIG monitoring

 

WLCG.

To support the WLCG site at JINR (a site is a separate cluster in the distributed WLCG environment) and other international collaborations, 22 servers with the gLite system (the WLCG middleware) have been installed.

In addition to supporting the JINR-LCG2 site itself, some of these servers implement important services and support functions for the Russian segment of the WLCG project.

 

Network and telecommunication channels. 

The network infrastructure is one of the most important components of JINR and the MICC, providing access to resources and the possibility to work with big data.

Local Area Network (LAN): 100 Gbps
Wide Area Network (WAN): 100 Gbps and 2x10 Gbps; an upgrade of the WAN to 2x100 Gbps is planned

The network infrastructure of JINR includes the following components:

  • The external optical telecommunication data transmission channel JINR-Moscow;
  • The fiber-optical backbone of the JINR local computer network;
  • The local computer networks of the Institute’s divisions.

The JINR network has direct connections with a number of scientific, educational and public networks at the following speeds:

  • with GEANT network – 10 Gbps;
  • with RBnet network – 10 Gbps;
  • with networks of Moscow and St. Petersburg – 10 Gbps;
  • with Internet – 10 Mbps.

The interworking with the city networks is organized on the basis of the DBN-IX traffic exchange node. The Institute's local network has direct connections at the following speeds:

  • with the Net by Net network (former LAN-Polis) – 100 Mbps;
  • with network Contact – 1 Gbps;
  • with Telecom-IPC network – 1 Gbps.

 

Auxiliary Servers.

  The CICC includes several other JINR user and service servers:

  • MySQL, PostgreSQL and Oracle databases;
  • ftp;
  • e-mail;
  • DNS;
  • Nagios monitoring  and others.

These servers operate mainly on 64-bit Xeon and Opteron hardware.

These services basically work “transparently” for the user and do not require additional settings.

  Some of these servers (WWW, ftp, MySQL) support laboratory (LIT) and institute services, while others (mail) are intended mainly to serve the needs of JINR users.