
«Systems and Means of Informatics» Scientific journal Volume 35, Issue 4, 2025
Abstract and Keywords
- L. P. Plekhanov
- Yu. G. Diachenko
- D. V. Khilko
- G. A. Orlov
Abstract: The article considers the problem of automating the design of self-timed (ST) digital circuits. Self-timed circuits are an alternative to synchronous ones. Despite significant advantages, especially in terms of operational reliability over a wide range of operating conditions and under the influence of unfavorable factors, ST circuits have not yet found wide application. In part, this is due to the complexity of their design, which requires a specific approach and consideration of the features of the ST functioning discipline. The greatest difficulty lies in formalizing and automating the synthesis of sequential ST units. For this purpose, the template method is proposed. It includes analyzing the original description of the synthesized ST circuit's synchronous counterpart using Yosys, an open-source logic synthesizer for synchronous circuits, searching for fragments implemented by units with memory, and replacing them with pre-prepared templates, namely, ST Verilog descriptions of sequential units adequate to the prototype in terms of operational features. The templates contain both the synchronous and the ST implementations of the corresponding units. The article provides template examples and describes how they are applied in the process of converting the original synchronous description of the synthesized circuit into an ST Verilog description. Substituting templates into the synthesized circuit description eliminates the need to synthesize these units individually with due regard for the specifics of ST circuits. The proposed approach ensures minimal hardware costs and optimal performance and guarantees the self-timed nature of the resulting circuit implementations of digital units.
Keywords: self-timed circuits; automated logic synthesis; template; sequential circuits; conversion; Verilog
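To make the template substitution step described in the abstract above concrete, here is a minimal Python sketch. The netlist text format, the cell names, and the TEMPLATE_LIBRARY mapping are hypothetical illustrations, not the authors' actual Yosys-based flow.

```python
# Hypothetical sketch of the template method: sequential cells found in a
# synthesized netlist description are replaced by pre-prepared self-timed
# (ST) counterparts. Cell and template names are illustrative assumptions.
import re

# Each entry pairs a synchronous sequential cell type with an ST template.
TEMPLATE_LIBRARY = {
    "DFF": "st_dff",         # assumed ST counterpart of a D flip-flop
    "DFFR": "st_dff_reset",  # assumed ST counterpart of a resettable D flip-flop
}

def substitute_templates(netlist: str) -> str:
    """Replace synchronous sequential cell instances with ST templates."""
    for sync_cell, st_cell in TEMPLATE_LIBRARY.items():
        # Match whole-word cell-type references in instance declarations.
        netlist = re.sub(rf"\b{sync_cell}\b", st_cell, netlist)
    return netlist

if __name__ == "__main__":
    demo = "DFF u1 (.D(d), .Q(q), .CLK(clk));"
    print(substitute_templates(demo))  # -> st_dff u1 (...);
```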
- V. A. Beschastnyi
- A. M. Turlikov
- N. V. Stepanov
- V. S. Shorgin
Abstract: State-of-the-art Internet of Things (IoT) applications are characterized by strict latency requirements, much tighter than those defined for 5G Massive Machine-Type Communications in ITU-R M.2410 (e.g., less than 10 ms). For this reason, the 3GPP (3rd Generation Partnership Project) has defined several procedures to reduce access latency in 5G NB-IoT systems. However, these procedures are purely ALOHA-type and do not use information that may be available at the NB-IoT base station. In this paper, two extensions of the random access procedure based on tree-splitting algorithms are considered. The results show that the algorithm with blocked access offers 25%-35% higher throughput than the free-access algorithm. At the same time, the considered algorithms can be effectively employed when the traffic load does not exceed 70%-80% of the available throughput.
Keywords: 5G; NB-IoT; RACH; delay; tree splitting
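The abstract above builds on tree-splitting collision resolution; the following toy simulation shows the basic binary variant with blocked access. The fair coin splits, the error-free channel, and the chosen parameters are simplifying assumptions, not the paper's model.

```python
# Toy simulation of binary tree-splitting collision resolution (blocked
# access): colliding devices repeatedly split into two random groups until
# every device transmits alone. Model simplifications are assumptions.
import random

def resolve(n_colliding: int) -> int:
    """Return the number of slots needed to resolve n colliding devices."""
    if n_colliding <= 1:
        return 1  # an idle or a successful slot
    # Each colliding device flips a fair coin to join the left or right group.
    left = sum(random.random() < 0.5 for _ in range(n_colliding))
    return 1 + resolve(left) + resolve(n_colliding - left)

if __name__ == "__main__":
    random.seed(1)
    trials, n = 10000, 10
    slots = sum(resolve(n) for _ in range(trials)) / trials
    print(f"avg slots to resolve {n} devices: {slots:.2f} "
          f"(throughput ~ {n / slots:.2f} successes/slot)")
```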
Abstract: In polymorbid and polyetiological diseases with complex clinical presentations and phasic progression, such as acute pancreatitis, the development of infectious complications combined with inadequate treatment strategies and surgical interventions can lead to mortality rates reaching 100%. Improving outcomes for patients with acute pancreatitis requires early prediction of infectious complications and potential lethal outcomes in order to identify the most severe patient cohort for timely comprehensive intensive therapy and adequate surgical treatment. Current diagnostic methods and approaches are maximally informative only by the end of the second or third day whereas the highest therapeutic efficacy is observed within the first 24-48 h. Thus, transitioning to personalized integrated prediction algorithms is critical. The paper presents the results of developing a heterogeneous field of models within the transformational model designed as a "neuro-fuzzy-case-based" expert system. This model is part of future cooperative self-adjusting hybrid intelligent systems for personalized patient state assessment (the case of acute pancreatitis).
Keywords: hybrid intelligent system; neuro-fuzzy system; precedent expert system; predicting in medicine; acute pancreatitis
- A. V. Kan
- Alexander A. Khoroshilov
- I. A. Chechulin
- S. A. Stupnikov
- Yu. V. Nikitin
- Alexey A. Khoroshilov
Abstract: The paper considers the functional architecture of a system for semantic analysis of scientific and technical documents. The system belongs to the category of intelligent platforms for the aggregation and analysis of scientific and technical documents. Its main technological solution is a set of declarative tools adaptable to various domains of science and technology that implement the concept of phraseological conceptual analysis of texts. This involves combining the results of graphemic, morphological, conceptual, and semantic-syntactic analysis of texts into a formalized metadata model. The subsystems and modules constituting the architecture are considered, and their functions and interaction are described. One of the most important components of the system is a set of morphological, semantic, syntactic, and conceptual dictionaries that are automatically generated and customized to domains of knowledge, ensuring the relevance of the set of linguistic resources.
A classification of systems for semantic analysis of texts and of the problems they solve is presented. The system being developed is positioned within this classification and its distinctive features are highlighted.
Keywords: phraseological conceptual analysis of texts; semantic analysis of texts; scientific and technological text processing
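A schematic sketch of the staged pipeline named in the abstract above (graphemic, morphological, conceptual analysis feeding a metadata model) follows. Every stage implementation here is a placeholder assumption; the real system's dictionaries and phraseological analysis are far richer.

```python
# Schematic staged text-analysis pipeline; placeholder stage logic only.
from dataclasses import dataclass, field

@dataclass
class DocumentMetadata:
    tokens: list = field(default_factory=list)
    lemmas: list = field(default_factory=list)
    concepts: list = field(default_factory=list)

def graphemic(text: str) -> list:
    return text.split()  # placeholder tokenizer

def morphological(tokens: list) -> list:
    return [t.lower().strip(".,;") for t in tokens]  # placeholder lemmatizer

def conceptual(lemmas: list, dictionary: dict) -> list:
    # Map lemmas to domain concepts via a (here, tiny) conceptual dictionary.
    return [dictionary[l] for l in lemmas if l in dictionary]

def analyze(text: str, dictionary: dict) -> DocumentMetadata:
    tokens = graphemic(text)
    lemmas = morphological(tokens)
    return DocumentMetadata(tokens, lemmas, conceptual(lemmas, dictionary))

if __name__ == "__main__":
    concept_dict = {"neural": "CONCEPT:NeuralNetwork", "network": "CONCEPT:Network"}
    print(analyze("Neural network methods.", concept_dict))
```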
- M. M. Charnine
- N. V. Somin
Abstract: A method for calculating the positions, velocities, and evolutionary trajectories of keywords in the vector space of a static language model is proposed.
The semantic distance between word vectors at times t1 and t2 is defined as the cosine distance between these vectors. The rate of semantic change is calculated as the semantic distance divided by the time interval (t2 - t1). The rate of semantic change expresses how quickly the meaning of a word, its context, its position in the vector space, and its semantically close neighbors change. The method also allows one to calculate the velocities and evolutionary trajectories of topics represented by a set of several related keywords. To calculate the velocities and trajectories, special evolutionary labels are inserted into the analyzed source text next to the words from the topic of interest. As a case study, the velocities and trajectories of keywords in the field of "machine learning" obtained from the PubMed library are considered. Keyword vectors and their changes over time are calculated using the Word2Vec neural network. A semantic map is presented that allows one to visually assess the evolutionary trajectories and velocities. It is based on the PCA (Principal Component Analysis) algorithm which projects the trajectories onto a two-dimensional plane.
Keywords: rate of semantic change; evolutionary trajectories; vector space; static language model
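The rate-of-semantic-change formula stated in the abstract above renders directly into code. In this minimal sketch, the toy vectors stand in for Word2Vec embeddings trained on period-specific corpora; only the cosine-distance-over-time computation itself follows the abstract.

```python
# Rate of semantic change: cosine distance between a word's vectors at
# times t1 and t2, divided by the elapsed time (t2 - t1).
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def semantic_velocity(vec_t1: np.ndarray, vec_t2: np.ndarray,
                      t1: float, t2: float) -> float:
    """Semantic distance travelled per unit time between t1 and t2."""
    return cosine_distance(vec_t1, vec_t2) / (t2 - t1)

if __name__ == "__main__":
    v2015 = np.array([0.9, 0.1, 0.3])  # stand-in embedding at t1 = 2015
    v2020 = np.array([0.5, 0.6, 0.2])  # stand-in embedding at t2 = 2020
    print(f"velocity: {semantic_velocity(v2015, v2020, 2015, 2020):.4f} /year")
```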
- A. V. Panteleev
- N. V. Turbin
- I. S. Nadorov
- N. O. Kononov
Abstract: The article explores the application of modern metaheuristic optimization algorithms to determine the parameters of new mathematical models describing experimental results. The subject of the study is the tendency of composite materials used in aircraft construction to lose stiffness under fatigue during operation. For mathematical modeling of stiffness degradation, a system of three ordinary differential equations describing the characteristic stages of the process is proposed. To find the parameters of the right-hand sides of the equations, two constrained optimization problems are solved, minimizing the deviations of the equation solutions from known composite material test results. The solutions to the equations are found using classical explicit numerical integration methods of varying orders of accuracy. For parametric identification, i.e., finding the values of the parameters of the right-hand sides of the differential equations from the test results, it is proposed to use metaheuristic optimization methods: a modified method simulating the behavior of a moth swarm as well as a random search method with sequential reduction of the study area. Numerical results for a specific composite material are presented, illustrating the effectiveness of the proposed approach.
Keywords: metaheuristic optimization algorithms; moth swarm behavior simulating method; damage model; composite materials
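One of the two metaheuristics named in the abstract above, random search with sequential reduction of the study area, is easy to sketch. The linear toy model and all parameter values below are assumptions standing in for the paper's stiffness-degradation ODE system.

```python
# Minimal sketch of parametric identification by random search with a
# sequentially shrinking search region; the linear toy model is a stand-in.
import random

def loss(params, data):
    a, b = params
    # Squared deviation of the linear toy model a*t + b from "test" data.
    return sum((a * t + b - y) ** 2 for t, y in data)

def shrinking_random_search(data, center=(0.0, 0.0), radius=10.0,
                            shrink=0.7, rounds=20, samples=200):
    best, best_loss = center, loss(center, data)
    for _ in range(rounds):
        for _ in range(samples):
            cand = tuple(c + random.uniform(-radius, radius) for c in best)
            l = loss(cand, data)
            if l < best_loss:
                best, best_loss = cand, l
        radius *= shrink  # sequentially reduce the study area around the best point
    return best, best_loss

if __name__ == "__main__":
    random.seed(0)
    data = [(t, 2.0 * t - 1.0) for t in range(10)]  # synthetic "test results"
    print(shrinking_random_search(data))  # recovers a ~ 2.0, b ~ -1.0
```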
- O. V. Druzhinina
- E. R. Korepanov
- I. V. Makarenkova
- V. V. Maksimova
- A. A. Petrov
Abstract: The paper is devoted to constructing and analyzing a model for predicting the technical state of axle boxes of railway cars based on artificial intelligence methods. The relevance of this problem stems from the need to create and improve high-tech, energy-efficient data analysis tools for diagnosing the technical condition of elements and systems of transport infrastructure. It is proposed to use the LSTM (Long Short-Term Memory) neural network architecture to predict the state from sequential data (time series). Synthetic datasets for neural network training are generated using the developed stochastic simulation model of thermal control of axle boxes. Computer modeling in the PyTorch environment made it possible to compare the results of computational experiments and to evaluate the effectiveness of LSTM training within the problem under consideration. The constructed predictive analytics model can serve as the basis for the ABITech Thermal Forecast Module, a software package for diagnosing the technical state of axle boxes.
Keywords: data analysis; predictive analytics; computer modeling; LSTM neural networks; time series; machine learning algorithms; technical state assessment; intelligent transport systems
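A minimal PyTorch LSTM of the kind the abstract above describes, trained here on a synthetic sensor-like trace for one-step-ahead prediction. The layer sizes, training loop, and data are illustrative assumptions, not the ABITech module's actual configuration.

```python
# Minimal LSTM regressor for one-step-ahead time-series prediction.
import torch
import torch.nn as nn

class AxleBoxLSTM(nn.Module):
    def __init__(self, input_size=1, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict the next value

if __name__ == "__main__":
    torch.manual_seed(0)
    t = torch.linspace(0, 20, 200)
    series = torch.sin(t) + 0.1 * torch.randn_like(t)  # synthetic thermal trace
    x = series[:-1].reshape(1, -1, 1)
    y = series[-1].reshape(1, 1)
    model = AxleBoxLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    print(f"final MSE: {loss.item():.4f}")
```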
- A. S. Stepanov
- L. V. Illarionova
- K. N. Dubrovin
- E. A. Fomina
- A. L. Verkhoturov
Abstract: The article considers the possibility of using weekly composite images from the Meteor-M No. 2 satellite to classify arable lands in Khabarovsk Krai. For four vegetation classes (soybeans, grain crops, perennial grasses, and fallow land), average Normalized Difference Vegetation Index (NDVI) seasonal variation series were constructed for municipal districts in the south of Khabarovsk Krai in 2024 and the main characteristics - the NDVI maximum value and the day of the maximum - were calculated. Statistically significant differences in these indicators between the average NDVI time series of different vegetation classes were revealed (p < 0.0001). Using validated data from Khabarovsk Krai, arable lands in the Bikinsky, Vyazemsky, and Lazovsky Districts were classified using machine learning (the Random Forest algorithm). The average accuracy of the method based on three-fold cross-validation was 87.6%. For the different vegetation classes, the F1 metric ranged from 0.61 to 0.93. Arable land maps were created for the southern districts of Khabarovsk Krai. It was found that fallow land accounted for over 30% of the region's total arable land area in 2024 while soybean crops accounted for 48%. The mapping results were entered into the developed geographic information system.
Keywords: cropland; machine learning; classification; satellite monitoring; GIS
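The Random Forest workflow with three-fold cross-validation named in the abstract above can be sketched as follows. The synthetic NDVI curves, which differ across classes only in the week of their maximum, are placeholders for the Meteor-M composite data.

```python
# Schematic Random Forest classification of NDVI seasonal series with
# three-fold cross-validation; the NDVI curves are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_weeks = 50, 20
classes = ["soybeans", "grain", "grasses", "fallow"]
X, y = [], []
for ci, _ in enumerate(classes):
    peak_week = 5 + 3 * ci  # classes differ in the timing of the NDVI maximum
    for _ in range(n_per_class):
        weeks = np.arange(n_weeks)
        ndvi = 0.3 + 0.5 * np.exp(-((weeks - peak_week) ** 2) / 8.0)
        X.append(ndvi + rng.normal(0, 0.03, n_weeks))  # noisy seasonal curve
        y.append(ci)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, np.array(X), np.array(y), cv=3)
print(f"3-fold CV accuracy: {scores.mean():.3f}")
```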
Abstract: The main concepts of spatial data representation and a number of modern methods for analyzing and extracting geographical knowledge from geographic databases, including current trends in the development of geographic artificial intelligence (AI), are presented. Searching for knowledge in geographical databases is a nontrivial process: it requires an understanding of fundamental geographical concepts and of the features of spatial or spatiotemporal representation of geoobjects and the related specialized algorithms, and it is difficult to implement by directly transferring traditional data mining methods. Geographical AI is a promising technology for intelligent spatial data processing in various tasks (clustering, classification, segmentation, interpolation, etc.), especially in the analysis of spatial and temporal patterns. The results obtained should serve as a starting point for the development of new approaches to the extraction of geographical knowledge.
Keywords: geodata; geographical databases; methods of extracting geographical knowledge; geographical artificial intelligence
- I. M. Adamovich
- O. I. Volkov
Abstract: The article is devoted to the development of tools supporting concrete historical investigation. The proposed approach allows one to determine priority directions of information search by automatically forming reasonable hypotheses. A special feature of the approach is its focus not on isolated historical facts but on the dynamics of concrete historical processes. The approach is based on methods of cluster analysis of time series. The article describes the features of time series that reflect information available in historical sources on changes in certain objects and phenomena of the past. The algorithms that best match these features have been identified. Special attention is paid to the definition of a suitable similarity measure for such time series. An example based on data reflecting the peculiarities of Old Believer entrepreneurship in the Russian Empire is considered in detail, and it is shown how the described approach makes it possible to form an informed assumption about attributing the object of research to Old Believers.
Keywords: concrete historical investigation; hypothesis formation; clustering; time series; similarity measure
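The article's specific similarity measure for historical time series is not reproduced here; the sketch below uses dynamic time warping (DTW) purely as an assumed stand-in measure suitable for sparse, unevenly documented series, as a basis for clustering.

```python
# DTW as a stand-in similarity measure for clustering historical time series.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic-time-warping distance."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

if __name__ == "__main__":
    # Toy series, e.g., yearly counts of documented enterprises.
    s1 = np.array([1, 2, 4, 7, 8], dtype=float)
    s2 = np.array([1, 1, 3, 6, 8], dtype=float)
    s3 = np.array([8, 7, 4, 2, 1], dtype=float)
    print(dtw_distance(s1, s2), dtw_distance(s1, s3))  # s1 is closer to s2
```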
- A. A. Grusho
- M. I. Zabezhailo
- V. V. Kulchenkov
- E. E. Timonina
Abstract: The analysis of data for the presence of effects of an unknown but present cause arises in many tasks; one example considered in the paper is the search for signs of fraud among the many recipients of consumer loans in a bank. To build the initial data, a method was chosen in which signs of fraud appear in transactional activity after a loan is received, namely, the signs are based on how the funds are withdrawn. This example is a special case of situations where, in a limited set of high-dimensional precedents, the effects of one cause are present and repeated. Under these conditions, the task of finding repetitions of the consequences of such effects is of great importance. An algorithm for this search has been constructed whose complexity is less than quadratic: the complexity of finding all coincidences in m ordered precedents does not exceed mN where N is the total length of all precedents. Taking into account the cost of ordering each precedent, given an initial ordering of the entire set of characteristics, the complexity of solving the problem does not exceed mN log₂ N.
Keywords: complexity of the classification task; machine learning; cause and effect relationships; searching for coincidences in the sequence of sets
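A simplified stand-in for the coincidence search sketched in the abstract above: once each precedent's feature sequence is sorted, a single merge-style pass over all of them collects the features shared by two or more precedents. This illustrates the linear-scan idea over ordered precedents, not the authors' exact algorithm; the feature names are invented for the example.

```python
# Merge-style scan over sorted precedents to find repeated features.
from heapq import merge
from itertools import groupby

def find_coincidences(precedents: list) -> dict:
    """Map each feature occurring in >= 2 precedents to its occurrence count.

    Each precedent is a sorted sequence of hashable features; the lazy k-way
    merge touches every element once, so the scan is linear in the total
    length N of all precedents.
    """
    merged = merge(*precedents)
    return {feat: c for feat, g in groupby(merged)
            if (c := sum(1 for _ in g)) >= 2}

if __name__ == "__main__":
    p1 = ["atm_withdrawal", "cash_out", "transfer_x"]
    p2 = ["cash_out", "deposit", "transfer_x"]
    p3 = ["cash_out", "payroll"]
    print(find_coincidences([p1, p2, p3]))  # {'cash_out': 3, 'transfer_x': 2}
```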