CICC Computing Clusters


The CICC cluster consists of the following hardware and software components.

Interactive cluster:

  • user interface lxpub[0-05].jinr.ru;
  • gateway for external connections lxui[01-04].jinr.ru.

The interactive cluster is the main and only place for all user interactive activity: application development and debugging, diverse word processing, Internet browsing, etc. Users are not allowed to run tasks lasting longer than 30 minutes; such processes are automatically killed, and the user is notified by e-mail.

Common computing farm:

The computing farm is intended both for local CICC users and for international projects. Supported virtual organizations: NICA (MPD, BM@N, SPD), Baikal-GVD, COMPASS (WLCG, CERN), and ILC (WLCG, http://www.linearcollider.org/).

Several services are installed and configured for JUNO:

  • CE JUNO, which is allowed to run jobs on the farm;
  • a VOMS server, a mirror of the main VOMS server in China;
  • a CVMFS stratum-1 server, supporting access to the JUNO software repositories in China.

In 2020, the batch server was migrated to Slurm (Simple Linux Utility for Resource Management), a workload manager for compute jobs on High Performance Computing clusters. It is in production and is used by the CE. The Computing Element (CE) for WLCG was changed to the ARC6 type (with an internal Slurm queue), since CREAM-CE is no longer supported by the central task launcher.

Job launch, execution control, and result forwarding are provided by SLURM. For debugging computing jobs, the interactive mode may be used.

To use the SLURM system:

1. The user must be registered with Kerberos and AFS.

2. For data storage, the user can be allocated space in the EOS distributed file system, /eos/user/<u>/<user>.

3. To run jobs in SLURM, the user needs to be registered in the SLURM database.

Instructions for the SLURM system can be found here.
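Once registered, jobs are submitted through a batch script. A minimal sketch is shown below; the job name, resource limits, and payload are illustrative assumptions, and actual values should follow the local SLURM instructions.

```shell
#!/bin/bash
# Minimal SLURM batch script (sketch; all values below are illustrative).
#SBATCH --job-name=test-job        # name shown in squeue
#SBATCH --output=test-job.%j.out   # stdout/stderr file; %j = job ID
#SBATCH --ntasks=1                 # a single-core job
#SBATCH --time=00:10:00            # wall-clock limit

# Payload: replace with the real application.
echo "Running on $(hostname)"
```

Save the script as, e.g., job.sh and submit it with `sbatch job.sh`; monitor it with `squeue -u $USER` and cancel it with `scancel <jobid>`.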

 

CVMFS:

CVMFS is used to deploy the large software packages of collaborations working in the WLCG and to run experimental data processing applications. Files and directories are placed on standard web servers and mounted into the universal /cvmfs namespace.

To place software in CVMFS, apply to the CICC administrator (grom@jinr.ru) for the creation of a new repository and send the id_rsa.pub key of the user who will maintain the repository.

A short CVMFS instruction is available here.

The following repositories are currently hosted: asys, baikalgvd, biohlit, borexino, cms, danss, darkside, dayabay, dstau, dvl, er, flnp-admin, fobos, genetics, juno, lgd, monument, nica (BM@N, MPD), panda, scg, star.

OSG HTCondor:

The OSG HTCondor computing element has been integrated into the Tier-2 centre infrastructure. This enables the STAR VO to process data using our Tier-2 with over 90% first-pass efficiency.

 

Software:

OS: Scientific Linux release 7.9

GCC: gcc (GCC) 4.4.7; C++: g++ (GCC) 4.4.7

FC: GNU Fortran (GCC) 4.4.7; FLAGS: -O2 -pthread -fPIC -m32

BATCH: SLURM with adaptation to Kerberos and AFS

FairSoft

FairRoot

MPDroot 

ALICE VObox

EOS aquamarine

WLCG

FTS

UMD-4