Institute of Informatics Problems of the Russian Academy of Sciences



«Systems and Means of Informatics»
Scientific journal
Volume 33, Issue 3, 2023


Abstract and Keywords

TO THE 40TH ANNIVERSARY OF THE DECREES OF THE CENTRAL COMMITTEE OF THE CPSU AND THE COUNCIL OF MINISTERS OF THE USSR ON THE FURTHER DEVELOPMENT OF WORK IN THE FIELD OF COMPUTER TECHNOLOGY, INCLUDING AT THE USSR ACADEMY OF SCIENCES
  • V. N. Zakharov

Abstract: The year 2023 marks the 40th anniversary of the establishment of the Institute of Problems of Informatics of the USSR Academy of Sciences, currently part of the Federal Research Center "Computer Science and Control" of the Russian Academy of Sciences. The Institute was established in accordance with a decree of the Central Committee of the CPSU and the Council of Ministers of the USSR and a decree of the Council of Ministers of the USSR dated July 29, 1983. The article discusses the prehistory of the development of computer technology in the USSR before these decrees were issued, the return of work on computer technology and computer science to the USSR Academy of Sciences, the establishment of the Department of Computer Science, Computer Technology, and Automation of the USSR Academy of Sciences, and the main provisions of the decrees. Brief information about the development of IPIAN from its creation to the present is also provided.

Keywords: computer engineering; computer science; Academy of Sciences of the USSR; Central Committee of the CPSU; Council of Ministers of the USSR; Institute of Informatics Problems

TIME SERIES MONOTONIC TREND ANALYSIS
  • M. P. Krivenko

Abstract: The problem of identifying changes in the characteristics of the time series under study during the observation period is considered. Almost always, the trend arises against the background of statistical dependence between the elements of the time series, and this dependence of individual observations becomes a nuisance factor. When formulating assumptions about the changes, preference is given to models of a general nature: the trend of the probabilistic characteristics is a monotonic function of unknown form of the observation number (a monotonic trend).
The need to account for both factors in the time series model, combined with the need for workable data processing methods, leads to the following scheme of actions: take already developed procedures as a basis; then, where possible, adjust the conditions of their correct application to the required ones; and, finally, use the adapted variants. Methods for handling the nuisance factor are analyzed: neutralization of dependence, accounting for dependence, and generalization of the time series model. As an example, the problem of monitoring the stable operation of a two-processor task processing system with a random selection of the number of required processors is considered. The possibilities and limitations of the proposed methods are demonstrated.

Keywords: monotone trend; decorrelation; ARIMA models
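
The abstract does not specify the exact procedures, so the following Python fragment is only an illustrative sketch of the general scheme it describes (neutralize the dependence, then test for a monotonic trend): an assumed AR(1) model is fitted for decorrelation and a hand-coded Mann-Kendall test is applied. Note that such naive prewhitening can absorb part of the trend, which is precisely the kind of complication the paper addresses.

# Illustrative sketch: monotonic trend test after decorrelation of a dependent series.
import numpy as np
from scipy.stats import norm
from statsmodels.tsa.arima.model import ARIMA

def mann_kendall(x):
    """Classical Mann-Kendall test for a monotonic trend (normal
    approximation, no tie correction): returns the S statistic and p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2.0 * norm.sf(abs(z))

# Synthetic example: AR(1) noise (serial dependence) plus a weak linear drift.
rng = np.random.default_rng(0)
n = 300
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.6 * noise[t - 1] + rng.normal()
series = 0.01 * np.arange(n) + noise

# "Neutralization of dependence": fit an (assumed) AR(1) model, take residuals.
# Caveat: this naive prewhitening can absorb part of the trend itself.
resid = ARIMA(series, order=(1, 0, 0)).fit().resid

for name, data in [("raw series", series), ("AR(1) residuals", resid)]:
    s, p = mann_kendall(data)
    print(f"{name}: S = {s:.0f}, p-value = {p:.4f}")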

DISPATCHING IN NONOBSERVABLE PARALLEL QUEUES WITH FINITE CAPACITIES
  • M. G. Konovalov
  • R. V. Razumchik

Abstract: Consideration is given to the problem of optimal centralized routing in one simple model of volunteer computing systems. The model consists of a finite number of parallel finite-capacity FIFO (first in, first out) queues, each with a single server, and a single dispatcher. Homogeneous jobs arrive one-by-one in a stochastic manner to the dispatcher which must instantly make a routing decision based solely on its previous routing decisions and, possibly, the time instants at which those decisions were made. Although no feedback from the queues is available, the dispatcher knows the maximum queue capacities, all service rates, the distribution of job sizes, and the interarrival time distribution. A new algorithm for generating routing decisions "on the fly" which optimizes a given cost function of the stationary mean sojourn time and the loss probability is proposed.

Keywords: parallel service systems; dispatching; resource allocation policies; control under incomplete observations; program control
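
The routing algorithm itself is not described in the abstract; the sketch below only illustrates the setting: a discrete-event simulation of parallel finite-capacity FIFO queues with an open-loop dispatcher that splits jobs in proportion to the (assumed) service rates and reports the mean sojourn time and loss probability. The rates, capacities, and the proportional-split rule are all assumptions.

# Illustrative sketch (not the algorithm from the paper): open-loop dispatching
# to parallel finite-capacity FIFO queues without feedback from the queues.
import random
from collections import deque

def simulate(n_jobs=200_000, arrival_rate=1.8,
             rates=(1.0, 0.7, 0.5), capacities=(5, 5, 5), seed=1):
    rng = random.Random(seed)
    weights = [r / sum(rates) for r in rates]
    busy = [deque() for _ in rates]          # departure times of jobs in system
    t = 0.0
    lost, sojourn_sum, accepted = 0, 0.0, 0
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)   # next arrival instant
        i = rng.choices(range(len(rates)), weights=weights)[0]
        q = busy[i]
        while q and q[0] <= t:               # purge jobs that already departed
            q.popleft()
        if len(q) >= capacities[i]:          # queue (including server) is full
            lost += 1
            continue
        start = max(t, q[-1]) if q else t    # FIFO, single server
        departure = start + rng.expovariate(rates[i])
        q.append(departure)
        sojourn_sum += departure - t
        accepted += 1
    return sojourn_sum / accepted, lost / n_jobs

mean_sojourn, loss_prob = simulate()
print(f"mean sojourn time ~ {mean_sojourn:.3f}, loss probability ~ {loss_prob:.3f}")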

PRINCIPLES OF DEVELOPMENT OF A SOFTWARE SUITE FOR THE NETWORK OPERATIONS CENTER OF THE NATIONAL RESEARCH COMPUTER NETWORK OF RUSSIA
  • A. G. Abramov
  • V. A. Porkhachev

Abstract: The paper is devoted to the general principles and methods of development and improvement of the software suite of the digital Network Operations Center of the new-generation National Research Computer Network of Russia (NOC NIKS). The paper studies and systematizes the tasks solved by the NOCs of large telecommunications networks, including those specific to national research and education networks. Based on the results, the main tasks and functions of NOC NIKS are highlighted, the arguments and justifications for creating an in-house software solution are given, its novelty, distinctive competitive advantages, and unique functions are indicated, and possible directions for development and practical use are proposed. Special attention is paid to the presentation and discussion of the features of the architecture of the program core of the suite as well as its interconnected basic and additional components, including a set of functional and auxiliary modules as part of the core, microservices for interacting with external systems, and external services.

Keywords: National Research Computer Network, NIKS; Network Operations Center, NOC; network monitoring and management; software suite
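
As a purely illustrative sketch of the plug-in architecture outlined above (a program core with a set of functional modules), and not the actual NOC NIKS software, a minimal core that registers and runs monitoring modules could look as follows; all names and interfaces are assumptions.

# Purely illustrative sketch of a plug-in style core for a network operations
# centre: functional modules register with the core, which runs them and
# collects their results.  All names and interfaces are assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CheckResult:
    module: str
    ok: bool
    details: str

class NOCCore:
    def __init__(self) -> None:
        self._modules: Dict[str, Callable[[], CheckResult]] = {}

    def register(self, name: str, check: Callable[[], CheckResult]) -> None:
        """Attach a functional module (e.g., availability or SLA monitoring)."""
        self._modules[name] = check

    def run_all(self) -> List[CheckResult]:
        """Run every registered module and gather its result for reporting."""
        return [check() for check in self._modules.values()]

core = NOCCore()
core.register("availability",
              lambda: CheckResult("availability", True, "all nodes reachable"))
core.register("utilization",
              lambda: CheckResult("utilization", False, "link X above threshold"))
for result in core.run_all():
    print(result)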

EFFICIENCY OF THE REDUCTION ALGORITHMS IN THE BIN PACKING PROBLEM
  • E. B. Barashov
  • A. V. Egorkin
  • D. V. Lemtyuzhnikova
  • M. A. Posypkin

Abstract: The bin packing problem is a well-known combinatorial problem consisting in finding the minimum number of fixed-size bins to hold a given set of items with known weights. Despite its simple formulation, the problem is NP-hard and exact methods for its solution are often inefficient in practice. Therefore, the research and development of approximate methods for solving the bin packing problem is of great importance. The paper investigates a class of approximate algorithms consisting of the sequential application of a reduction and one of four "greedy" algorithms. The effect of the reduction on the quality of the solutions obtained and on the running time of the methods under consideration is evaluated. The algorithms are compared on four data sets according to several criteria responsible for the quality of the obtained solutions and the time required to find them. The experimental study shows that the efficiency of the reduction varies widely and depends strongly on the problem coefficients.

Keywords: reduction; bin-packing problem; discrete optimization; greedy algorithms; solution improvement by two-sided placement
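
The abstract does not name the particular reduction, so the sketch below illustrates the general "reduction + greedy" scheme with one possible reduction (pre-packing pairs of items whose weights sum exactly to the bin capacity, which is known to preserve the optimal number of bins) followed by first-fit decreasing; neither choice is necessarily the one studied in the paper.

# Illustrative sketch of "reduction + greedy" for bin packing (the specific
# reduction and greedy rule are assumptions).
def reduce_exact_pairs(items, capacity):
    """Pre-pack pairs of items whose weights sum exactly to the capacity.
    Returns (pre-packed bins, remaining items)."""
    remaining = sorted(items)
    bins, used = [], [False] * len(remaining)
    lo, hi = 0, len(remaining) - 1
    while lo < hi:
        s = remaining[lo] + remaining[hi]
        if s == capacity:
            bins.append([remaining[lo], remaining[hi]])
            used[lo] = used[hi] = True
            lo += 1
            hi -= 1
        elif s < capacity:
            lo += 1
        else:
            hi -= 1
    rest = [w for w, u in zip(remaining, used) if not u]
    return bins, rest

def first_fit_decreasing(items, capacity):
    """Greedy packing: place each item (heaviest first) into the first bin it fits."""
    bins, loads = [], []
    for w in sorted(items, reverse=True):
        for i, load in enumerate(loads):
            if load + w <= capacity:
                bins[i].append(w)
                loads[i] += w
                break
        else:                       # no open bin fits: open a new one
            bins.append([w])
            loads.append(w)
    return bins

capacity = 10
items = [7, 3, 6, 4, 5, 5, 2, 8, 9, 1, 4, 4]
prepacked, rest = reduce_exact_pairs(items, capacity)
solution = prepacked + first_fit_decreasing(rest, capacity)
print(len(solution), "bins:", solution)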

RESEARCH METHODOLOGY FOR CREATING FUNCTIONS OF SCANNING AND PRINTING SYSTEMS
  • I. V. Safonov
  • I. A. Matveev

Abstract: Scanning, copying, and printing devices are widely used in industry, government, and for personal needs. Combined with application and system software, these devices constitute scanning and printing systems. Formulating a research methodology for creating the functions of such systems makes it possible to formalize and speed up the development of new image processing methods.
A review of methodologies for scientific research, industrial research and development, and software development was carried out. Implementation of the research result in a product was chosen as the success criterion. Based on a retrospective analysis, the factors that prevented the implementation of results were identified. At the research stage, these factors can be considered as risks. Using the analytic hierarchy process, the risks are ranked by their importance.
A research methodology based on risk analysis and management is proposed for creating new functions of scanning and printing systems.

Keywords: research methodology; risk management; analytic hierarchy process
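
As a minimal sketch of the analytic hierarchy process step mentioned above, the fragment below ranks three hypothetical risk factors by the principal eigenvector of a pairwise comparison matrix and checks Saaty's consistency ratio; the comparison values are invented purely for illustration.

# Minimal sketch of the analytic hierarchy process: priority vector from a
# pairwise comparison matrix plus Saaty's consistency ratio.  The comparison
# values are invented for illustration.
import numpy as np

# A[i, j] = how much more important risk i is than risk j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector (risk ranking)

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
cr = ci / ri if ri else 0.0                   # consistency ratio (< 0.1 is acceptable)

print("risk priorities:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))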

PROBLEMS OF FORMATION OF A SYSTEM OF COORDINATED LOCAL GEOGRAPHICAL ONTOLOGIES
  • D. A. Nikishin

Abstract: The essence of geontology as a means of representing geographical knowledge and the tasks it solves are considered. A multilevel system of local geontologies, considered as a basis for promising geoinformation systems, is presented. The problems of implementing such a system are highlighted: the development of a system of conventionally understood entities (geoconcepts), properties, and classifications based on a coordinated approach to structuring and formalizing the modeled phenomena; ensuring spatial and logical connectivity of the components of the terrain model; and ensuring mutual consistency of local ontologies (interaction interfaces) in the aspects of generalization and variation as well as, if necessary, alternativeness and temporality in order to solve problems at the "junction" of local geontologies.

Keywords: geontology; geoinformation systems; modeling of geographical phenomena; system of local geontologies; dynamic geontology

SOME APPROACHES TO THE ANALYSIS OF FACTORS AFFECTING THE INFORMATION SECURITY OF ARTIFICIAL INTELLIGENCE SYSTEMS
  • A. A. Zatsarinny
  • A. P. Suchkov

Abstract: The article deals with the problems of ensuring the necessary level of information security of artificial intelligence systems, which is one of the key conditions for their introduction into state practice. It is noted that the main specific threat is posed by the training procedures of artificial intelligence systems, during which not only the information base but also the algorithms of intelligent inference are configured and can change. An information technology for the analysis of certain factors is proposed which makes it possible to identify the features of ensuring the information security of artificial intelligence systems and to characterize the totality of existing threats. Specific issues of information protection of artificial intelligence systems and a number of measures to ensure the information security of these systems are discussed.

Keywords: information security; artificial intelligence systems; training procedures; types of information protection

SOME CHALLENGES OF CRITICAL INFRASTRUCTURE INFORMATION SECURITY MONITORING
  • A. A. Grusho
  • N. A. Grusho
  • M. I. Zabezhailo
  • A. A. Zatsarinny
  • E. E. Timonina

Abstract: Critical infrastructure information security should be viewed from a unified system perspective based on a unified assessment of the value of information. This implies a unified approach to assessing subsystems in terms of their ability to use infrastructure interdependencies to detect information security violations. The paper discusses cyber interdependencies in the context of information security. The main idea of the approach is to observe the consequences of a cause which is itself an unobservable source of actions aimed at violating information security.

Keywords: critical infrastructure; information security; monitoring mechanisms of information security

MACHINE TRANSLATION BY ChatGPT: MONITORING OF OUTCOME REPRODUCIBILITY
  • A. Yu. Egorova
  • I. M. Zatsman
  • V. O. Romanenko

Abstract: The paper considers the question of monitoring the reproducibility of the results produced by the ChatGPT chatbot over a time interval when solving a mathematical task, generating code, and resolving a visual puzzle. A brief review of experimental data on monitoring the reproducibility of the results for these three applications is given. The presented data show that the outcomes of ChatGPT when solving the same problem may change over time. Moreover, significant changes may occur within a relatively short period of time, which emphasizes the need to monitor and evaluate the behavior of the ChatGPT chatbot. The main goal of the paper is to study the reproducibility of the machine translation outcomes produced by ChatGPT over a given time interval. The experimental data obtained during the monitoring of outcome reproducibility demonstrate some changes in the results, including a decline in the translation quality of the same text fragments over a time interval. To monitor the outcome reproducibility and evaluate the behavior of ChatGPT, a previously developed method for interval evaluation of machine translation is used.

Keywords: ChatGPT applications; monitoring; outcome reproducibility; machine translation; interval evaluation
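
The interval evaluation method itself is not described in the abstract; the sketch below only illustrates the monitoring loop: the same fragments are translated repeatedly through a user-supplied function and the later outputs are compared with the first run by a crude string-similarity score. The function translate_fn and the similarity score are assumptions, not the paper's method.

# Illustrative sketch of monitoring outcome reproducibility: translate the
# same fragments at different moments with any chatbot-backed function and
# compare later outputs with the first run.  translate_fn and the crude
# similarity score are placeholders, not the paper's interval evaluation.
import difflib
import time
from typing import Callable, Dict, List

def monitor_reproducibility(fragments: List[str],
                            translate_fn: Callable[[str], str],
                            runs: int = 2,
                            pause_s: float = 0.0) -> Dict[str, List[float]]:
    """Return, per fragment, the similarity of each later run to the first one."""
    first: Dict[str, str] = {}
    scores: Dict[str, List[float]] = {f: [] for f in fragments}
    for run in range(runs):
        for fragment in fragments:
            translation = translate_fn(fragment)
            if run == 0:
                first[fragment] = translation
            else:
                ratio = difflib.SequenceMatcher(
                    None, first[fragment], translation).ratio()
                scores[fragment].append(ratio)
        time.sleep(pause_s)        # in practice: hours or days between runs
    return scores

# Stand-in for a real ChatGPT call so the sketch runs as-is.
def fake_translate(text: str) -> str:
    return text.upper()

print(monitor_reproducibility(["пример текста", "ещё один фрагмент"],
                              fake_translate))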

METHODOLOGICAL APPROACH TO QUALITY CONTROL OF DATA TRANSFORMATION IN AN INTERDISCIPLINARY DIGITAL PLATFORM
  • A. A. Zatsarinny
  • A. P. Shabanov

Abstract: The objects of the research are the processes of data transformation in an interdisciplinary digital platform that provides information interaction between subjects of economic, managerial, and scientific activities. The results of the analysis of incoming and outgoing data transformation processes in the digital platform are presented. It is shown that the processing time of transformed data in computing resources depends on the information that these data contain, which in turn is determined by the intentions of the subjects. It is revealed that for economic agents, the timeliness parameters of providing information about new knowledge transmitted to them by scientific partners are important. The data processing time parameters in computing resources are classified as data transformation quality parameters which are approved in the contracts of the subjects with the platform operator and are taken into account in the requirements for quality control processes. For the first time, a methodological approach to quality control is proposed based on a comparative assessment of the degree of compliance of the actual values of the time parameters of data transformation with their trusted values which are approved in the contracts. The practical significance of the methodological approach lies in providing an objective tool for monitoring the compliance of the digital platform operator with its contractual obligations to the interacting entities.

Keywords: digital platform; data transformation; information interaction; timeliness; trusted control
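
A minimal sketch of the comparison underlying the proposed quality control: measured data transformation times are checked against the trusted values fixed in the contracts. The field names and numbers below are assumptions made for illustration.

# Minimal sketch: compare actual data transformation times with the trusted
# (contractual) values.  Field names and threshold values are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class TransformationRecord:
    subject: str          # interacting entity (e.g., a scientific partner)
    actual_time_s: float  # measured processing time in computing resources
    trusted_time_s: float # contractual (trusted) processing time

def compliance_report(records: List[TransformationRecord]) -> List[str]:
    report = []
    for r in records:
        degree = r.trusted_time_s / r.actual_time_s   # >= 1 means compliant
        status = "OK" if r.actual_time_s <= r.trusted_time_s else "VIOLATION"
        report.append(f"{r.subject}: actual {r.actual_time_s:.1f} s, "
                      f"trusted {r.trusted_time_s:.1f} s, "
                      f"compliance {degree:.2f} -> {status}")
    return report

records = [TransformationRecord("subject A", 12.0, 15.0),
           TransformationRecord("subject B", 20.0, 15.0)]
print("\n".join(compliance_report(records)))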

THEORY OF S-SYMBOLS: NETWORK TABS-SOLVER OF S-PROBLEMS
  • V. D. Ilyin

Abstract: The theory of S-symbols is an extended generalization of the theory of S-modeling. It is considered as a part of the methodological support for the development of artificial intelligence systems in the S-environment (including knowledge systems, systems of S-modeling of problems and program design, etc.). The S-environment based on interconnected systems of S-(symbols, codes, signals) serves as the infrastructural basis for the implementation of information technologies for various purposes. The article presents the third part (out of four) of the description of the theory. Definitions of the concepts of a multilayer table (tabs) and tabs structures used to represent S-objects are provided. A description of the basics of tabs-construction of S-objects is given. The main part of the article is devoted to a network tabs solver of arbitrary problems based on a system of knowledge about S-problems. The methodology of its construction is considered as a contribution to the methodological support of the development of digitalization systems for various types of activities.

Keywords: theory of S-symbols; S-problem; tabs; network tabs-solver of S-problems; automation of digitalization systems development

DATA CLEANSING IN THE TECHNOLOGY OF CONCRETE HISTORICAL INVESTIGATION SUPPORT
  • I. M. Adamovich
  • O. I. Volkov

Abstract: The article continues a series of works devoted to the technology for supporting concrete historical research. The technology is based on the principles of co-creation and crowdsourcing and is designed for a wide range of users who are not professional historians or biographers. The expediency of expanding the list of concrete historical research tasks solved within the framework of the described technology using machine learning methods is shown.
The special importance of data preparation, due to the fragmentation and inconsistency of concrete historical information, is noted. This article is devoted to the specifics of concrete historical data cleansing and to the analysis of the possibility of using mechanisms and algorithms already integrated into the technology for this purpose. The main directions in which data cleansing is carried out are listed. Suitable tools already included in the technology have been identified for each direction. Particular attention is paid to tools for eliminating inconsistencies. The stages of data cleansing are listed and the scheme of interaction of all mechanisms and algorithms described in the article is given.

Keywords: concrete historical investigation; distributed technology; machine learning; data cleansing; data inconsistency
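
As a purely illustrative sketch of one cleansing direction noted above (eliminating inconsistencies in fragmented biographical data), the fragment below flags date conflicts in a biographical record; the record structure and the rules are assumptions, not the tools of the described technology.

# Purely illustrative sketch: detecting date inconsistencies in fragmented
# biographical records.  The record structure and rules are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PersonRecord:
    name: str
    birth_year: Optional[int] = None
    death_year: Optional[int] = None
    event_years: Optional[List[int]] = None

def find_inconsistencies(record: PersonRecord) -> List[str]:
    issues = []
    if (record.birth_year is not None and record.death_year is not None
            and record.death_year < record.birth_year):
        issues.append("death year precedes birth year")
    for year in record.event_years or []:
        if record.birth_year is not None and year < record.birth_year:
            issues.append(f"event in {year} precedes birth")
        if record.death_year is not None and year > record.death_year:
            issues.append(f"event in {year} follows death")
    return issues

record = PersonRecord("N. N. Ivanov", birth_year=1890, death_year=1942,
                      event_years=[1913, 1951])
print(find_inconsistencies(record))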

SOCIAL EFFICIENCY OF INFORMATION TECHNOLOGIES
  • K. K. Kolin

Abstract: The problem of evaluating the effectiveness of information technologies and of the information systems for various purposes created on their basis is considered.
It is shown that their use can significantly improve the quality of life and has therefore become an integral part of the life of modern society. In addition, their use is necessary to ensure public, national, and global security. The composition of the most socially significant promising information technologies and systems for various spheres of society is given. A conceptual approach to the creation of a scientific methodology for assessing the social effectiveness of information technologies and systems is proposed. It is shown that the saving of social time achieved as a result of their practical use can serve as a universal criterion for such an assessment. Promising directions for the development of information technologies are considered.

Keywords: information systems; information technologies; quality of life; national security; social efficiency