

«INFORMATICS AND APPLICATIONS» Scientific journal Volume 19, Issue 3, 2025
Abstract and Keywords
- I. N. Sinitsyn Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: The paper is devoted to methods of conditionally optimal filter (COF) synthesis by the Bayes criterion (BC) in continuous and discrete implicit non-Gaussian stochastic systems (StS) reducible to explicit ones. A short survey of COFs by mean-square, energy, and complex statistical criteria for explicit and implicit continuous and discrete StS is given. Reduction methods for smooth and nonsmooth implicit functions are developed. Exact and approximate (based on normal approximation and statistical linearization) methods of BC COF synthesis in reducible implicit continuous and discrete StS are considered. Special attention is paid to the normal BC COF. The problem of equivalence of non-Gaussian noises in BC COF is discussed. Future directions of research and applications are presented.
Keywords: Bayes criterion (BC); conditionally optimal filter (COF); implicit StS; normal COF; stochastic system (StS)
- A. V. Bosov Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: An integer-valued version of the stabilization problem for a linear stochastic differential system is considered in which the drift evolves in a jumping manner governed by a Markov chain. The control objective is formalized via a quadratic cost functional. Depending on the conditions, both the full information case (the state of the chain is known) and the indirect observation case (the system state serves as an indirect observation of the unknown chain state) are possible. A distinguishing feature of the formulation lies in the integer constraints on the admissible control values. Unlike the previously solved unconstrained problem, the existence conditions for a solution are not satisfied in the "integer" formulation; therefore, an ε-optimal solution is investigated. An ε-optimal control can be obtained by discretizing the optimal solution of the unconstrained problem and applying mixed-integer nonlinear programming. However, the stochastic nature of the problem and the large number of switching scenarios prevent guaranteed computational feasibility of solving it by dynamic programming. For practical implementation, a relaxation method is used: a heuristic approximation is computed as the result of an integer transformation of the ε-optimal control in the unconstrained problem. Three variants of such transformations are proposed. A numerical experiment was conducted using the same applied model as in previous works on unconstrained control (position dynamics of a simple mechanical actuator). The results primarily confirm the applicability of the proposed solutions in terms of the stabilization objective and also allow for a comparison of the nature of the relaxation strategies.
Keywords: stabilization of a linear system; quadratic cost functional; dynamic programming; feedback control; Wonham filter; mixed-integer nonlinear programming; relaxation method; mechanical actuator
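The abstract does not spell out the three integer transformations of the ε-optimal control; a generic sketch of plausible relaxation variants (floor, nearest-integer, and mean-preserving randomized rounding, all illustrative assumptions rather than the paper's exact constructions) might look like:

```python
import math
import random

def to_integer_control(u, mode="nearest", rng=random):
    """Project a continuous control value u onto the integer grid.

    Three illustrative relaxation variants (hypothetical, not the
    paper's exact transformations):
      "floor"      - always round down,
      "nearest"    - deterministic nearest-integer rounding,
      "stochastic" - randomized rounding that preserves the mean.
    """
    if mode == "floor":
        return math.floor(u)
    if mode == "nearest":
        return round(u)  # note: Python rounds exact halves to even
    if mode == "stochastic":
        lo = math.floor(u)
        # round up with probability equal to the fractional part
        return lo + (1 if rng.random() < u - lo else 0)
    raise ValueError("unknown mode: " + mode)
```

The stochastic variant keeps the expected control equal to the continuous one, at the price of extra randomness in the closed loop.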
- O. V. Shestakov Department of Mathematical Statistics, Faculty of Computational Mathematics and Cybernetics, M. V. Lomonosov Moscow State University, 1-52 Leninskie Gory, GSP-1, Moscow 119991, Russian Federation, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation, Moscow Center for Fundamental and Applied Mathematics, M. V. Lomonosov Moscow State University, 1 Leninskie Gory, GSP-1, Moscow 119991, Russian Federation
Abstract: Wavelet analysis methods are widely used to solve inverse statistical problems involving the inversion of linear homogeneous operators. The advantage of these methods is their computational efficiency and their ability to adapt both to the operator type and to local features of the estimated function. To suppress noise in the observed data, threshold processing of the expansion coefficients of the observed function over the wavelet basis is used. One of the most effective approaches is the block thresholding method, in which the expansion coefficients are processed in groups, allowing information about neighboring coefficients to be taken into account. Sometimes, the nature of the data is such that observations are recorded at random times. If the sample points form a variation series constructed from a sample uniformly distributed over the data recording interval, then the use of threshold processing procedures is adequate and does not worsen the quality of the obtained estimates. The paper analyzes the mean-square risk estimate of the block thresholding method and shows that under certain conditions this estimate is strongly consistent and asymptotically normal.
Keywords: linear homogeneous operator; wavelets; block thresholding; unbiased risk estimate; random samples
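The block shrinkage rule referred to above can be sketched as a James-Stein-type block threshold on the detail coefficients; the threshold constant `lam` and the contiguous block layout are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def block_threshold(coeffs, block_size, sigma, lam=4.505):
    """James-Stein-type block shrinkage of wavelet detail coefficients.

    Each contiguous block is shrunk by max(0, 1 - lam*L*sigma^2 / S),
    where L is the block size and S the block energy; blocks whose
    energy falls below the threshold are zeroed out entirely.
    (A trailing partial block is treated with the full-block formula.)
    """
    out = np.zeros_like(coeffs, dtype=float)
    for start in range(0, len(coeffs), block_size):
        block = coeffs[start:start + block_size]
        energy = float(np.sum(block ** 2))
        if energy > 0.0:
            shrink = max(0.0, 1.0 - lam * block_size * sigma ** 2 / energy)
            out[start:start + block_size] = shrink * block
    return out
```

Grouping means one large coefficient can rescue its neighbors in the same block, which is exactly the "information about neighboring coefficients" the abstract mentions.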
- A. V. Borisov Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation, Moscow Center for Fundamental and Applied Mathematics, M.V. Lomonosov Moscow State University, 1 Leninskie Gory, GSP-1, Moscow 119991, Russian Federation
- A. N. Ignatov Moscow Aviation Institute (National Research University), 4 Volokolamskoe Shosse, Moscow 125933, Russian Federation
- V. A. Borisov Moscow Aviation Institute (National Research University), 4 Volokolamskoe Shosse, Moscow 125933, Russian Federation
Abstract: The paper focuses on designing a freight train speed profile that minimizes the expected damage from various types of railway accidents. Total losses include both damage to the considered train and potential harm to trains on adjacent tracks. The probability functions for all accident types are parameterized by the route's topology and profile, i.e., its local slope and curvature. These probability functions, along with those describing the average financial loss per derailed car, treat train speed as a control variable. The speed profile is a solution to a constrained mathematical programming problem. It is represented as a piecewise constant function, remaining constant over each segment of the route with uniform slope or curvature. This profile satisfies both instantaneous geometric and integral time constraints. Since a piecewise constant speed profile is physically unrealistic, the paper also proposes a method for transforming it into a profile with uniformly accelerated transitions. A numerical example is provided to illustrate how different loss functions and time constraints affect the choice of an optimal speed profile.
Keywords: piecewise constant control; speed profile; expected damage; nonlinear optimization
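The transformation of a piecewise constant profile into one with uniformly accelerated transitions can be sketched in the time domain; placing each ramp at the switch instant and using a single acceleration bound `accel` are illustrative assumptions, not the paper's method:

```python
def smoothed_speed(t, switch_times, speeds, accel):
    """Speed at time t after replacing each jump of a piecewise constant
    profile with a uniformly accelerated ramp of slope +/-accel starting
    at the switch time (ramps are assumed to finish before the next switch).

    speeds[0] is the initial speed; speeds[k+1] is the target after
    switch_times[k].
    """
    v = speeds[0]
    for ts, target in zip(switch_times, speeds[1:]):
        if t <= ts:
            break
        # progress along the ramp, capped at the full speed change
        step = min(accel * (t - ts), abs(target - v))
        v += step if target > v else -step
    return v
```

Away from the switch instants the smoothed profile coincides with the piecewise constant one, so the integral time constraint is perturbed only by the short ramp intervals.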
- Yu. E. Malashenko Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- I. A. Nazarova Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- M. V. Kozlov Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: The performance capabilities of a multiuser communication system under network capacity scaling are investigated. Within the framework of computational experiments, the impact of increasing edge capacities along flow paths is analyzed in terms of transmitting maximum flows. For each pair of communicating nodes, the value of the maximum allowable internodal flow is determined independently under monopoly control modes. The obtained maximum flow values are used to construct and compare vectors of uniform internodal flows of all types that can be simultaneously transmitted through the network. The concept of a vector-response of the system to increasing edge capacities along transmission routes is introduced. For each reconstruction project option and for every pair of communicating nodes, the ratio of the capacity increase to the growth of the maximum flow is calculated. The obtained values are reordered according to the max-min rule. Based on the vector-responses, a set of guaranteed estimates of the maximum possible changes in operational parameters is formed. The results of the computational experiments for networks with different structural characteristics are analyzed.
Keywords: multicommodity model of the communication network; guaranteed estimate in case of network capacity scaling; maximum flow transmission routes
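The per-pair ratio underlying the vector-response can be sketched as follows; `max_flow` is a textbook Edmonds-Karp routine, and interpreting the max-min reordering as sorting in nonincreasing order is an assumption, since the abstract does not give the exact rule:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dense capacity matrix `cap`."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual network
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:  # no augmenting path left
            return total
        # Bottleneck residual capacity along the found path
        bottleneck, v = float("inf"), t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        # Augment the flow along the path
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

def response_vector(cap_before, cap_after, pairs):
    """For each node pair, the ratio of total added capacity to the
    growth of its maximum flow, reordered in nonincreasing order."""
    n = len(cap_before)
    added = sum(cap_after[i][j] - cap_before[i][j]
                for i in range(n) for j in range(n))
    ratios = []
    for s, t in pairs:
        growth = max_flow(cap_after, s, t) - max_flow(cap_before, s, t)
        ratios.append(added / growth if growth > 0 else float("inf"))
    return sorted(ratios, reverse=True)
```

The worst (largest) entries of the sorted vector then give the guaranteed, pair-independent estimate of how much added capacity a unit of extra flow may cost.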
- E. S. Sopin Peoples' Friendship University of Russia, 6 Miklukho-Maklaya Str., Moscow 117198, Russian Federation, Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- A. I. Nazarin Peoples' Friendship University of Russia, 6 Miklukho-Maklaya Str., Moscow 117198, Russian Federation
- S. Ya. Shorgin Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: The goal of the study is to analyze the trade-off between Physical Downlink Control Channel (PDCCH) and Physical Downlink Shared Channel (PDSCH) resources in 5G NR (New Radio) as the number of subscribers increases. Using stochastic geometry and probability theory methods, the models for Primary Resource Block (PRB) assignment strategies were developed. Subsequently, by applying queueing theory, a model jointly considering subscriber demands for both channels was formulated. Numerical results indicate that with nonsequential PRB assignment, the PDCCH size becomes the bottleneck, while with sequential assignment, a lack of PDCCH resources leads to missed scheduling opportunities. Nonsequential resource assignment potentially allows for increased PDSCH utilization but requires careful tuning of the PDCCH and PDSCH resource ratio. The proposed model enables determining the volume of resources allocated for PDCCH/PDSCH based on user needs, minimizing missed scheduling opportunities and maximizing resource utilization. The numerical investigation revealed that sequential resource assignment in the PDSCH channel leads to a significant degradation in system performance. Furthermore, for effective utilization of the nonsequential resource assignment method, dynamic adjustment of the resources allocated to the PDCCH is required.
Keywords: 5G; millimeter wave; scheduling; resource loss system
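The "resource loss system" view of channel resources can be illustrated with the classical Erlang-B blocking recursion; this is a generic single-channel sketch, not the paper's joint PDCCH/PDSCH model:

```python
def erlang_b(servers, offered_load):
    """Blocking probability of an Erlang loss system with `servers`
    resource units and `offered_load` Erlangs, via the stable recursion
    B(0) = 1,  B(n) = a*B(n-1) / (n + a*B(n-1))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

In the paper's setting the blocked requests correspond to missed scheduling opportunities, and the trade-off is over how the fixed resource pool is split between the two channels.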
- S. F. Tyurin Perm National Research Polytechnic University, 7 Prof. Pozdeev Str., Perm 614013, Russian Federation, Perm State University, 15 Bukireva Str., Perm 614990, Russian Federation
- M. S. Nikitin Perm National Research Polytechnic University, 7 Prof. Pozdeev Str., Perm 614013, Russian Federation
- Yu. A. Stepchenkov Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- Yu. G. Diachenko Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: Passive fault tolerance of digital cells and devices is considered using multioption redundancy, taking into account the features of topological simulation of transistor-level redundancy. A model is built that includes channel majority redundancy with redundant majority voters, allowing for channel "collapse" during diagnostics, and deep redundancy at the level of individual channel layers with special majority voters that configure the layers into channels. The known methods are combined in a relationship that optimizes a given objective function under the required constraints. In addition, redundancy is used at the individual transistor level with varying degrees of failure protection. The topological features of such redundancy are investigated by constructing various variants of circuits based on disjunctive normal, conjunctive normal, and intermediate forms. The cardinality of the set of such variants is established. A method is proposed for finding the topologically best variant for devices of large dimension. By means of topological modeling, the preferred redundancy option is established based on the product of consumed power and switching delay. Examples of parameters of the created topologies are given.
Keywords: fault tolerance; redundancy; majority voter; topological simulation
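The majority voting that underlies channel redundancy can be sketched at the bit level; the fault-injection helper `tmr` is hypothetical, added only to show how a single-channel failure is masked:

```python
def majority(a, b, c):
    """Bitwise 2-of-3 majority voter: maj = ab + ac + bc."""
    return (a & b) | (a & c) | (b & c)

def tmr(channel, x, fault=None):
    """Triple modular redundancy: evaluate three copies of `channel`,
    optionally flip the output bit of copy `fault`, and vote."""
    outputs = [channel(x) for _ in range(3)]
    if fault is not None:
        outputs[fault] ^= 1  # inject a single-channel output fault
    return majority(*outputs)
```

Any single faulty channel is outvoted by the two healthy ones; making the voter itself redundant, as in the abstract, removes the voter as a single point of failure.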
- A. A. Grusho Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- N. A. Grusho Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- M. I. Zabezhailo Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- V. V. Kulchenkov VTB Bank, 43-1 Vorontsovskaya Str., Moscow 109147, Russian Federation
- E. E. Timonina Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: The problem of classifying data of very large dimension is considered when only a limited set of training samples of such data is available. Under these conditions, the possibility of using cause-and-effect relationships in solving classification problems of the specified type is examined. The solution is based on the existence of cause-and-effect relationships between unknown causes and the observed, partially determined effects of these causes in incoming new data. Training on a small set of data is used. The problems are solved under conditions where the size of the data and the number of possible data properties tend to infinity. Asymptotic conditions for unambiguous classification of new data were found. In a particular case, the classification problem was investigated in the presence of random distortions of deterministic effects in the data. Conditions for the possibility of unsupervised learning are formulated. The work shows the fundamental possibilities of applying cause-and-effect relationships in the tasks of medical diagnostics, identification of fraudulent schemes in the financial sector, and assessment of situational awareness in cybersecurity.
Keywords: classification of data of large dimension; artificial intelligence; cause-and-effect relationships
- A. A. Goncharov Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
- P. V. Iaroshenko Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation, Research Computing Center, Lomonosov Moscow State University, 1, bld. 4 Leninskie Gory, GSP-1, Moscow 119991, Russian Federation
Abstract: The article outlines the principal challenges encountered in the automation of annotating implicit discourse relations, analyzes the underlying causes of these challenges, and suggests possible solutions. The article examines the main stages of the process: (i) the extraction of examples with implicit discourse relations; (ii) the delimitation of relational argument boundaries; and (iii) the selection of features for annotation of the extracted fragments. The results of applying the method of search with exclusion in parallel texts are presented along with a critical assessment of its limitations. Two factors significantly hindering the automation of argument identification in text spans with implicit discourse relations are analyzed: the considerable variability in argument length and the noncontiguous nature of arguments, which may be interrupted by intervening tokens. A comprehensive analysis of methods for automating feature selection for the linguistic data is provided. It has been demonstrated that even the processing of formal features may require the involvement of experts. Furthermore, while some semantic features are amenable to partial automation, others currently require manual annotation. The conclusions are illustrated by examples from the corpus.
Keywords: linguistic annotation; discourse relations; logical-semantic relations; implicitness; parallel texts
- I. M. Zatsman Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences, 44-2 Vavilov Str., Moscow 119333, Russian Federation
Abstract: The aim of the paper is to describe information models of the processes of discovering linguistic knowledge about the studied language units in texts, based on the hierarchy proposed by Ackoff in 1989. The components of the hierarchy considered in the paper are data, information, and knowledge. Ackoff's principal outcome consists in delimiting the semantic content of the words denoting the components of the hierarchy and in a general formulation of the problem of describing component transformations. Ackoff's description of this problem has generated discussions for decades. At the same time, the problem of describing the component transformations still remains unsolved. Without claiming to solve this problem in general, the paper proposes an approach to solving its particular cases for the subject domain of knowledge discovery in texts, based on detailing the semantic content of the words "data," "information," and "knowledge." Based on the proposed approach, the paper describes two models of knowledge discovery in texts, each of which specifies a list of transformations of the detailed components of Ackoff's hierarchy. The first model became the basis for designing technologies without taking into account unsuccessful outcomes of knowledge discovery in texts, and the second model, for designing technologies taking them into account. The experiments conducted showed that the designed technologies, using supracorpora databases, provide for discovering both already known and new linguistic knowledge in texts as well as for the formation of new classifications.
Keywords: Ackoff's hierarchy; data; information; knowledge; knowledge discovery; classification