
Young scientists research

Geometry database for the CBM experiment
Akishina E.P.1, Alexandrov E.I.1, Alexandrov I.N.1, Filozova I.A.1, Friese V.2, Ivanov V.V.1,3
aleksand@jinr.ru
1 JINR, Dubna, Russia
2 GSI, Darmstadt, Germany
3 National Research Nuclear University “MEPhI”, Moscow, Russia
 
The paper describes the Geometry Database (Geometry DB) for the CBM experiment. The Geometry DB supports the CBM geometry, which describes the CBM experimental setup at the level of detail required for simulating particle transport through the setup with GEANT3.
The main purpose of this database is to provide convenient tools for:
1) managing the geometry modules (MVD, STS, RICH, TRD, RPC, ECAL, PSD, Magnet, Beam Pipe);
2) assembling various versions of the CBM setup as a combination of geometry modules and additional files (Field, Materials);
3) supporting various versions of the CBM setup.
Members of the CBM collaboration may use both GUI (Graphical User Interface) and API (Application Programming Interface) tools to work with the Geometry Database; a sketch of possible API usage is given below.
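As an illustration of the kind of API access described above, a minimal Python sketch follows. The endpoint, field names and setup/module tags are hypothetical, since the abstract does not specify the actual interface:

import json
import urllib.request

GEOMETRY_DB_URL = "https://example.org/cbm/geometry/api"  # hypothetical endpoint

def fetch_setup(setup_tag):
    """Retrieve the list of geometry modules of a given CBM setup version."""
    with urllib.request.urlopen(f"{GEOMETRY_DB_URL}/setup/{setup_tag}") as resp:
        setup = json.load(resp)
    return setup["modules"]  # hypothetical field, e.g. ["sts_v19a", "trd_v20b"]

def download_module(module_tag, target_dir="."):
    """Download one geometry module file referenced by the setup."""
    url = f"{GEOMETRY_DB_URL}/module/{module_tag}/file"
    urllib.request.urlretrieve(url, f"{target_dir}/{module_tag}.geo.root")

if __name__ == "__main__":
    for tag in fetch_setup("sis100_electron"):  # hypothetical setup tag
        download_module(tag)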

 

Phase Transition Model Mimicking the Pasta Phase and Robustness of the Phenomenon of the Third Family of Compact Stars
Alexander Ayriyan
ayriyan@jinr.ru
JINR, Dubna, Russia
 
A simple mixed phase model mimicking a “pasta”-type mixed phase in the quark-hadron phase transition is developed and applied to relativistic neutron star configurations. The model is parametrized by an additional pressure, corresponding to the impact of structural effects in the mixed phase, added to the critical pressure of the Maxwell construction. The robustness of third-family solutions for hybrid compact stars with a quark matter core, which correspond to the occurrence of high-mass twin stars, is investigated against a softening of the phase transition obtained by mimicking the effects of pasta structures in the mixed phase. A comparison of the results with the constraints given by the binary neutron star merger GW170817 shows that at least the heavier of the two neutron stars could be a member of the third family of hybrid stars.
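The parametrization can be illustrated as follows (a minimal sketch; the abstract does not give the exact functional form, so this relation is an assumption): at the critical pressure $P_c$ of the Maxwell construction, the mixed-phase pressure is shifted by a dimensionless parameter $\Delta_P$,
\[
P_{\mathrm{mix}} = (1 + \Delta_P)\, P_c ,
\]
so that $\Delta_P = 0$ recovers the sharp Maxwell transition, while $\Delta_P > 0$ smooths it and mimics the pasta structures.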

 

JINR Cloud Services
Balashov N.A., Baranov A.V., Kutovskiy N.A., Mazhitova Ye.M., Semenov R.N.
balashov@jinr.ru
JINR, Dubna, Russia
 
The paper reviews the main new features and development directions of the JINR cloud services cloud.jinr.ru and git.jinr.ru. For the first service, a new architecture is presented, which includes changes in the network configuration and the storage system as well as a transition to a different high-availability configuration of the front-end nodes. These changes will considerably improve the user experience by significantly reducing virtual machine deployment and migration times, and will also increase the reliability of data storage and VM operation. The optimization of the workload distribution in the JINR cloud is considered as well. The paper also reviews the work dedicated to the integration of the clouds of the JINR Member States. For the git.jinr.ru service, three main new features are reviewed: the Git Large File Storage (LFS) system, the GitLab Pages service and the monitoring system based on Prometheus and Grafana.
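To illustrate how a Prometheus-based monitoring backend such as the one mentioned above is typically queried, here is a small Python sketch; the server address and the example metric are placeholders, while the /api/v1/query endpoint and the response layout are the standard Prometheus HTTP API:

import requests

PROMETHEUS = "http://prometheus.example.jinr.ru:9090"  # hypothetical address

def instant_query(expr):
    """Run an instant PromQL query and return (labels, value) pairs."""
    r = requests.get(f"{PROMETHEUS}/api/v1/query", params={"query": expr})
    r.raise_for_status()
    return [(item["metric"], item["value"][1])
            for item in r.json()["data"]["result"]]

# Example: per-instance non-idle CPU rate over five minutes.
for labels, value in instant_query(
        'rate(node_cpu_seconds_total{mode!="idle"}[5m])'):
    print(labels.get("instance"), value)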

 

SFF analysis of the structure of vesicular systems: MPI implementation of the fitting procedure and numerical results
Bashashin M.V.1,2, Zemlyanaya E.V.1,2, Sapozhnikova T.P.1, Kiselev M.A.2,3
bashashinmv@jinr.ru
1 LIT, JINR, Dubna, Russia
2 Dubna State University, Dubna, Russia
3 FLNP, JINR, Dubna, Russia
 
The separated form factors (SFF) method is an effective approach to investigating the structure of polydisperse systems of phospholipid vesicles on the basis of small-angle scattering data. In this framework, the basic parameters of the vesicular system are determined by minimizing the discrepancy between the experimental small-angle scattering intensity and the results of the SFF calculations. The minimization procedure is based on the generalized least-squares method implemented in the FUMILI code of the JINRLIB library. In this contribution, we utilize PFUMILI, the parallel MPI version of this code. The effectiveness of the parallel implementation is tested on the HybriLIT cluster. Results of the numerical analysis of small-angle neutron scattering data collected at the YuMO small-angle spectrometer of the Frank Laboratory of Neutron Physics are presented.
The work is supported by the Russian Science Foundation (project No. 14-12-00516).
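The parallelization pattern behind such a fit can be sketched with mpi4py: each rank evaluates the model intensity on its slice of the q-points, and the partial chi-square sums are combined by a reduction. This is an illustrative sketch, not the PFUMILI code; the model function and the data are placeholders:

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def model_intensity(q, params):
    # Placeholder for the SFF model intensity I(q; params).
    a, b = params
    return a * np.exp(-b * q**2)

def chi2(params, q, i_exp, i_err):
    # Each rank handles a contiguous slice of the q-points.
    lo, hi = rank * len(q) // size, (rank + 1) * len(q) // size
    res = (i_exp[lo:hi] - model_intensity(q[lo:hi], params)) / i_err[lo:hi]
    # Sum the partial chi-squares over all ranks.
    return comm.allreduce(np.sum(res**2), op=MPI.SUM)

# Synthetic data so the sketch runs as-is; replace with SANS data in practice.
q = np.linspace(0.01, 0.3, 1000)
i_exp = 5.0 * np.exp(-10.0 * q**2)
i_err = np.full_like(q, 0.05)
print(rank, chi2((5.0, 10.0), q, i_exp, i_err))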

 

Towards J/ψ→e+e− triggering with the TRD in the CBM experiment
Derenovskaya O.1, Ablyazimov T.1,2, Ivanov V.1,3
odenisova@jinr.ru
1 JINR, Dubna, Russia
2 Gesellschaft für Schwerionenforschung mbH (GSI), Darmstadt, Germany
3 National Research Nuclear University “MEPhI”, Moscow, Russia
 
The first steps towards J/ψ→e+e− triggering in the CBM experiment are presented. The Transition Radiation Detector (TRD) is the most suitable detector for this task: it should provide reliable electron identification, a high level of pion suppression, and the reconstruction of the trajectories of charged particles passing through the detector.

 

Collection and analysis of resource usage data from the LIT JINR cloud infrastructure
Kadochnikov I.
kadivas@jinr.ru
JINR, Dubna, Russia
 
JINR operates an infrastructure-as-a-service cloud based on OpenNebula, providing resources to local users as well as to international cloud and grid computing projects. Many cloud use cases utilize resources unpredictably and unevenly, so the problem of optimizing resource usage inevitably arises for both physical and virtual resources. However, the flexibility of cloud infrastructures provides a unique way to mitigate this problem: “overcommitment”, that is, allocating more virtual resources than physically exist on the server.
The monitoring system for the cloud infrastructure usage was created as part of the cloud dispatcher project. It consists of monitoring agents on every physical node, a central server that collects and stores the data, a backup storage server and a web interface for visualizing the cloud resource usage history. Agents monitoring the KVM and OpenVZ hypervisors over SNMP were developed, as well as Nagios/Icinga2 collection modules for these metrics.
The collected information will aid in selecting the optimal cloud resource management strategy. Analysis of the physical node load shows that implementing overcommitment and automatic migration can be very effective. Analysis of the virtual machine usage history led to a prospective strategy of grouping virtual machines into classes based on their expected mean usage and providing each class with a group of dedicated physical nodes with a fitting overcommitment ratio, as sketched below.
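A minimal Python sketch of this grouping strategy follows; the class boundaries and overcommitment ratios are illustrative assumptions, not values from the study:

# Map each VM to a usage class by its mean CPU utilization and derive a
# per-class overcommitment ratio. All thresholds below are illustrative.
CLASSES = [
    ("idle",   0.05, 8.0),   # mostly idle VMs: pack many per physical core
    ("light",  0.25, 4.0),
    ("medium", 0.60, 2.0),
    ("heavy",  1.01, 1.0),   # busy VMs: no overcommitment
]

def classify(mean_usage):
    """Return the (class name, overcommitment ratio) for mean usage in [0, 1]."""
    for name, upper, ratio in CLASSES:
        if mean_usage < upper:
            return name, ratio
    return CLASSES[-1][0], CLASSES[-1][2]

# Example with three hypothetical VMs and their mean CPU usage history.
for vm, usage in {"vm-01": 0.02, "vm-02": 0.40, "vm-03": 0.90}.items():
    print(vm, *classify(usage))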

 

Simulation of interprocessor interactions for MPI-applications in the cloud infrastructure
Nechaevskiy A.V., Pryahina D.I.
pry-darya@yandex.ru
LIT, JINR, Dubna, Russia
 
A new cloud center for parallel computing is to be created in the Laboratory of Information Technologies (LIT) of the Joint Institute for Nuclear Research (JINR), which is expected to significantly improve the efficiency of numerical calculations and to speed up the delivery of new physically meaningful results thanks to a more rational use of the computing resources. To optimize a scheme of parallel computations in a cloud environment, it is necessary to test this scheme for various combinations of equipment parameters (processor speed and number, throughput of the communication network, etc.). As a test problem, the parallel MPI algorithm for the calculation of long Josephson junctions (LJJ) was chosen. The impact of the above-mentioned factors of the computing environment on the computing speed of the test problem is evaluated by simulation with the SyMSim program developed at LIT.
The simulation of the LJJ calculations in the cloud environment enables users to find, without a series of test runs, the optimal number of CPUs for a given network type before running the calculations in a real computing environment, which can save significant computing time and resources. The main parameters of the model were obtained from a computational experiment conducted on a dedicated cloud-based testbed. The computational experiments showed that the pure computation time decreases in inverse proportion to the number of processors but depends significantly on the network bandwidth. A comparison of the empirical results with the simulation results showed that the model correctly reproduces parallel calculations performed with MPI technology. It also confirms our recommendation: for fast calculations of this type, both the number of CPUs and the network throughput have to be increased at the same time. The simulation results also made it possible to derive an empirical analytical formula expressing the dependence of the calculation time on the number of processors for a fixed system configuration. The obtained formula can be applied to other similar studies but requires additional tests to determine the values of its variables.
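The qualitative behaviour described above (computation time falling as 1/p plus a bandwidth-limited communication term) is commonly captured by a model of the following form; this is an illustrative ansatz, not the authors' formula, which is not given in the abstract:
\[
T(p) \approx \frac{T_1}{p} + \frac{C(p)}{B},
\]
where $T_1$ is the single-processor computation time, $p$ the number of processors, $B$ the network bandwidth, and $C(p)$ the interprocessor communication volume, which typically grows with $p$; the optimal number of CPUs lies where the two terms balance.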

 

Tier-1 service monitoring system
Pelevanyuk I.S.
pelevanyuk@jinr.ru
LIT, JINR, Dubna, Russia
 
In 2015, a Tier-1 center for processing data from the LHC CMS detector was launched at JINR. The large and growing infrastructure, the pledged QoS and the complex architecture make support and maintenance very challenging. It is vital to detect signs of service failures as early as possible and to have enough information to react properly. Apart from infrastructure monitoring, there is a need for consolidated service monitoring. The top-level services that accept jobs and data from the Grid depend on lower-level storage and processing facilities, which themselves rely on the underlying infrastructure. The sources of information about the state and activity of the Tier-1 services are diverse and isolated from each other. It was therefore decided to develop a new monitoring system. Its goals are to retrieve monitoring information about the services from various sources, to process the data into events and statuses, and to react according to a set of rules, e.g. to notify service administrators or to restart a service. Currently, the monitoring system aggregates information from the different sources and can determine the status of a particular component.
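The event-and-rule processing described above can be sketched as follows in Python; this is an illustrative model only, and the service name, threshold and action are hypothetical:

# Rule-based service monitoring sketch: metrics are turned into statuses,
# and a failing status triggers a reaction such as notifying administrators.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    service: str
    is_failing: Callable[[dict], bool]   # predicate over collected metrics
    action: Callable[[str], None]        # reaction, e.g. notify or restart

def notify(service):
    print(f"ALERT: {service} degraded, notifying administrators")

def evaluate(rules, metrics_by_service):
    """Derive a status per service and fire the configured reactions."""
    statuses = {}
    for rule in rules:
        failing = rule.is_failing(metrics_by_service.get(rule.service, {}))
        statuses[rule.service] = "CRITICAL" if failing else "OK"
        if failing:
            rule.action(rule.service)
    return statuses

rules = [Rule("dCache", lambda m: m.get("failed_transfers", 0) > 100, notify)]
print(evaluate(rules, {"dCache": {"failed_transfers": 250}}))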

 

HLIT-VDI – a new service of the HybriLIT ecosystem for work with applied software packages
Matveev M., Podgainy D., Streltsova O., Torosyan Sh., Zrelov P., Zuev M.
shushanik@jinr.ru
JINR, Dubna, Russia
 
A new service, HLIT-VDI, has been developed for the shared use of applied software packages on the HybriLIT cluster through a GUI (graphical user interface). By means of this service, it is now possible to work with applied software packages such as Wolfram Mathematica, Maple, Matlab, GEANT4, etc. via remote access to virtual machines (VMs) within the HybriLIT cluster. The developed service allows carrying out computations both inside the VMs and, for massive computations, on the resources of the cluster.

 

Data management system of the UNECE ICP Vegetation Program
Uzhinskiy A.
zalexandr@list.ru
JINR, Dubna, Russia

The aim of the UNECE International Cooperative Program (ICP) Vegetation, carried out in the framework of the United Nations Convention on Long-Range Transboundary Air Pollution (CLRTAP), is to identify the main polluted areas of Europe, produce regional maps and further develop the understanding of long-range transboundary pollution. The Data Management System (DMS) of the UNECE ICP Vegetation consists of a set of interconnected services and tools deployed and hosted in the cloud infrastructure of the Joint Institute for Nuclear Research (JINR). The DMS is intended to provide the program participants with a modern unified system for collecting, analyzing and processing biological monitoring data. General information about the DMS and its capabilities is presented.


 

Nuclotron beam momentum estimation in BM@N experiment
Voytishin N.N.
voitishinn@gmail.com
LIT, JINR, Dubna, Russia
 
The Baryonic Matter at Nuclotron (BM@N) experiment is the first step in the realization of the Nuclotron-based Ion Collider fAcility (NICA) mega-science project. The Nuclotron facility is able to provide various types of beams with kinetic energies from 1 to 6 GeV per nucleon. The BM@N experimental setup is a complex structure that is meant to become a precise tool for the study of strange hyperon and hypernuclei production yields and ratios. The accuracy assessment of the performance of the main detector systems is one of the main current tasks. The results of the beam momentum reconstruction procedure using two tracking detector systems, the Multi-Wire Proportional Chambers and the Drift Chambers, are presented.
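The standard relation behind such a momentum estimate is given below for illustration (the actual reconstruction procedure may differ in detail): for a particle of charge $q$ (in units of the elementary charge) bent in the analyzing magnet,
\[
p\,[\mathrm{GeV}/c] \approx 0.3\, q\, B\,[\mathrm{T}]\, R\,[\mathrm{m}],
\]
where $B$ is the magnetic field and $R$ is the radius of curvature of the track determined from the hits in the tracking chambers upstream and downstream of the magnet.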