The article discusses the conceptual foundations of transforming the fire extinguishing management system on the basis of the theory of complex organizational systems. The author substantiates the need to move from linear-hierarchical models to adaptive, networked structures capable of providing high stability and efficiency of response under the uncertainty and dynamics of emergency situations. The compliance of the fire extinguishing system with the characteristics of a complex organizational system is analyzed, contradictions between its complex nature and its primitive control mechanisms are identified, and the causes and consequences of this paradox are examined. Multi-agent digital platforms, digital twins, situation centers, and game theory methods for optimizing resource allocation and supporting decision-making are proposed as ways to address the identified problems.
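As an illustration of how game theory methods can support resource allocation in this setting, the sketch below (not from the article; the payoff matrix, scenarios, and allocation options are invented) solves a small zero-sum matrix game with linear programming to obtain a mixed allocation strategy with a guaranteed value.

```python
# A minimal sketch: optimal mixed strategy for a small zero-sum
# "resource allocation vs. fire-development scenario" game via linear
# programming. All numbers below are illustrative only.
import numpy as np
from scipy.optimize import linprog

# Rows: candidate resource allocations, columns: fire-development scenarios.
# Entry = expected share of protected assets saved (illustrative values).
A = np.array([
    [0.8, 0.3, 0.5],
    [0.4, 0.7, 0.6],
    [0.5, 0.5, 0.9],
])
m, n = A.shape

# Variables: x_1..x_m (mixed strategy) and v (guaranteed value); maximize v.
c = np.zeros(m + 1); c[-1] = -1.0             # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0 for every scenario j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), [[0.0]]])  # probabilities sum to 1
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[-1]
print("allocation mix:", np.round(x, 3), "guaranteed value:", round(v, 3))
```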
Keywords: system approach, organizational system, firefighting, network structures, management, digitalization, transformation, game theory, optimization, criteria
This article examines the impact of artificial intelligence (AI) on the speed and quality of decision-making in management processes. The key factors determining the effectiveness of AI use are analyzed, including the volume and quality of data, model complexity, computing resources, and the level of technology integration. Statistical data on the adoption of AI in global practice are presented: only 38% of companies are fully prepared for the effective use of AI, whereas in Russia this figure is 22%. The main obstacles are data quality (60% of global companies face problems, 75% in Russia) and a lack of computing resources (35% of organizations worldwide have the necessary infrastructure, 19% in Russia). The article's conclusions emphasize the need to invest in digital infrastructure and to improve the transparency of algorithms in order to increase confidence in AI-based decisions.
Keywords: artificial intelligence, decision-making, automation, digital transformation, data, interpretability, computing resources
This paper is devoted to the theoretical analysis and comparison of methods and algorithms for automatic identity verification based on the dynamic characteristics of a handwritten signature. The processes of collecting and preprocessing dynamic characteristics are considered. Classical methods, including hidden Markov models and support vector machines, are analyzed alongside modern neural network architectures, including recurrent, convolutional, and Siamese neural networks. The advantages of Siamese neural networks in verification tasks with a small volume of training data are highlighted. Key metrics for assessing the quality of biometric systems are defined. The advantages and disadvantages of the considered methods are summarized, and promising areas of research are outlined.
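A minimal PyTorch sketch of the Siamese idea mentioned above (illustrative, not the authors' implementation): two dynamic-signature sequences are embedded by a shared GRU encoder and compared by Euclidean distance under a contrastive loss. The feature layout (x, y, pressure, dx, dy) and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class SignatureEncoder(nn.Module):
    def __init__(self, in_features=5, hidden=64, emb=32):
        super().__init__()
        self.gru = nn.GRU(in_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, emb)

    def forward(self, x):                      # x: (batch, time, features)
        _, h = self.gru(x)                     # h: (1, batch, hidden)
        return self.head(h.squeeze(0))         # (batch, emb)

def contrastive_loss(z1, z2, same, margin=1.0):
    # same = 1 for genuine pairs, 0 for forgery pairs
    d = torch.norm(z1 - z2, dim=1)
    return (same * d.pow(2) + (1 - same) * torch.clamp(margin - d, min=0).pow(2)).mean()

encoder = SignatureEncoder()
a = torch.randn(8, 120, 5)                     # 8 reference signatures, 120 samples each
b = torch.randn(8, 120, 5)                     # 8 questioned signatures
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(a), encoder(b), labels)
loss.backward()                                # one illustrative training step
```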
Keywords: verification, signature, machine learning, dynamic characteristic, hidden Markov models, support vector machine, neural network approach, recurrent neural networks, convolutional neural networks, Siamese neural networks, type I error
The article discusses the principles of operation, key technologies, and prospects for the development of eye-tracking systems in virtual reality (VR) devices. It highlights the main components of such systems, including infrared cameras, computer vision algorithms, and calibration methods. Eye-tracking technologies such as Pupil Center Corneal Reflection (PCCR) are analyzed in detail, as well as their integration with rendering to implement foveated rendering, which significantly reduces the load on the GPU. Current issues, including latency and power consumption, are discussed, and solutions are proposed, such as the use of predictive algorithms and hardware acceleration. Special attention is paid to promising areas, including neurointerfaces and holographic systems. The article is based on the latest research and developments from leading companies such as Tobii, Qualcomm, and Facebook Reality Labs. The article is of interest to VR device developers, researchers in the field of human-computer interaction, and computer vision specialists.
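As a small illustration of the calibration step used with PCCR-style trackers (assumed for illustration, not vendor code): the pupil-center-minus-corneal-reflection vector is mapped to gaze coordinates with a second-order polynomial fitted by least squares over the calibration points.

```python
import numpy as np

def design_matrix(v):
    dx, dy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])

# Illustrative calibration data: pupil-glint vectors and known target positions.
rng = np.random.default_rng(0)
vectors = rng.uniform(-1, 1, size=(9, 2))          # 9-point calibration grid
targets = 0.5 + 0.4 * vectors + 0.05 * vectors**2  # synthetic "true" mapping

X = design_matrix(vectors)
coeffs, *_ = np.linalg.lstsq(X, targets, rcond=None)  # (6, 2) polynomial coefficients

def gaze(v):
    return design_matrix(np.atleast_2d(v)) @ coeffs   # estimated gaze coordinates

print(gaze(np.array([0.2, -0.1])))
```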
Keywords: eye-tracking, virtual reality, foveated rendering, computer vision, human-computer interaction, PCCR
This article presents a methodology for assessing damage to railway infrastructure in emergency situations using imagery from unmanned aerial vehicles (UAVs). The study focuses on applying computer vision and machine learning techniques to process high-resolution aerial data for detecting, segmenting, and classifying structural damage.
Optimized image processing algorithms, including U-Net for segmentation and Canny edge detection, are used to automate analysis. A mathematical model based on linear programming is proposed to optimize the logistics of restoration efforts. Test results show reductions in total cost and delivery time by up to 25% when optimization is applied.
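A minimal sketch (with invented numbers, not the paper's data) of the kind of linear program used to optimize restoration logistics: a transportation problem minimizing the delivery cost of repair materials from depots to damaged track sections.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],    # cost[i, j]: depot i -> damaged site j
                 [5.0, 3.0, 7.0]])
supply = np.array([40.0, 35.0])      # units available at each depot
demand = np.array([25.0, 20.0, 30.0])

m, n = cost.shape
c = cost.ravel()                                     # decision variables x[i, j]
A_ub = np.zeros((m, m * n))                          # depot capacity constraints
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0
A_eq = np.zeros((n, m * n))                          # each site receives its demand
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), "total cost:", res.fun)
```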
The paper also explores 3D modeling from UAV imagery using photogrammetry methods (Structure from Motion and Multi-View Stereo), enabling point cloud generation for further damage analysis. Additionally, machine learning models (Random Forest, XGBoost) are employed to predict flight parameters and resource needs under changing environmental and logistical constraints.
The combination of UAV-based imaging, algorithmic damage assessment, and predictive modeling allows for a faster and more accurate response to natural or man-made disasters affecting railway systems. The presented framework enhances decision-making and contributes to a more efficient and cost-effective restoration process.
Keywords: UAVs, image processing, LiDAR, 3D models of destroyed objects, emergencies, computer vision, convolutional neural networks, machine learning methods, infrastructure restoration, damage diagnostics, damage assessment
The principles of biophilic design are becoming a key component of the architectural design of medical facilities due to the well-known psychological impact of natural elements on patients and medical staff. The inclusion of natural elements, such as the use of plants, natural light and shade, colors found in nature, and naturally occurring patterns and curves, in medical facilities has been shown to create a psychologically safe environment that promotes the health and well-being of patients and staff. This article explores the fundamental principles of biophilic design, the scientific evidence supporting its therapeutic effects, and practical examples of its use in healthcare settings to improve psychological health and well-being. This work contributes to the existing body of knowledge on biophilic design by providing an up-to-date review of recent research and real-world applications, including the challenges encountered in practice.
Keywords: biophilia, biophilic design, sustainable architecture, healthcare architecture, well-being, sustainability, biophilic architecture
The article discusses the problem of increasing the reliability of multifunctional display devices (UMI) in the context of the digital transformation of production processes. An approach to predictive diagnostics based on the analysis of the operational and thermal characteristics of UMI using machine learning methods is proposed. A classification of UMI according to physico-technological principles and architectural levels is carried out, which makes it possible to structure the diagnostic models. Mathematical methods for predicting failures are considered, including logistic regression, gradient boosting (CatBoost), and residual resource estimation models. Special attention is paid to the development of the Thermal Emission-Based UMI Profiling (TEB-UP) method, which relies on the analysis of heat maps and machine vision algorithms (PCA, autoencoders, CNN). It is shown that temperature unevenness is a sensitive indicator of degradation that manifests earlier than traditional failure indicators. The TEB-UP method demonstrates the potential for integration into monitoring and predictive maintenance systems within the framework of the Industry 4.0 and 5.0 concepts.
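An illustrative sketch (not the TEB-UP implementation) of how PCA can flag abnormal temperature unevenness: heat maps from healthy devices define a low-dimensional subspace, and a high reconstruction error marks a candidate for degradation. Shapes, data, and thresholds are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
healthy = rng.normal(40.0, 1.0, size=(200, 32 * 32))   # 200 flattened 32x32 heat maps

pca = PCA(n_components=10).fit(healthy)

def reconstruction_error(heat_map):
    flat = heat_map.reshape(1, -1)
    restored = pca.inverse_transform(pca.transform(flat))
    return float(np.mean((flat - restored) ** 2))

threshold = np.percentile(
    [reconstruction_error(h.reshape(32, 32)) for h in healthy], 99)

suspect = rng.normal(40.0, 1.0, size=(32, 32))
suspect[10:14, 10:14] += 15.0                            # local hot spot
print("degradation suspected:", reconstruction_error(suspect) > threshold)
```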
Keywords: multifunctional display devices, predictive diagnostics, thermal profiling, residual resource, machine learning
The practice of producing optical interference coatings shows that, when new thin-film materials are used, obtaining optical products that meet the specified requirements on the quality function depends on the accuracy of their refractive index. Estimates of this index obtained on large frequency crystals differ, which makes it impossible to produce narrowband filters with the required technical parameters. This article proposes an approach to estimating the refractive index of a thin film by solving the inverse synthesis problem, based on the experimental determination of the thickness of the deposited films with an X-ray fluorescence coating thickness analyzer and on reflection coefficient spectra obtained with a broadband spectrophotometer. The numerical modeling carried out in the study showed that even with a 5% tolerance on the estimated coating thickness, a fairly accurate determination of the refractive index can be expected. The correctness of the approach was verified using a thin film with a known refractive index, which was also determined by the proposed method of numerically modeling the reflection spectrum of the coating's digital twin.
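A minimal sketch of this kind of inverse problem (under simplifying assumptions not made in the article: a single non-absorbing, non-dispersive film on a known substrate at normal incidence): the refractive index is recovered by fitting the modeled single-layer reflectance to the measured spectrum, with the film thickness taken as the value measured by the X-ray fluorescence analyzer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def reflectance(n_film, d_nm, wavelengths_nm, n_sub=1.52, n_air=1.0):
    # Airy formula for a single homogeneous film between air and substrate
    r01 = (n_air - n_film) / (n_air + n_film)
    r12 = (n_film - n_sub) / (n_film + n_sub)
    phase = np.exp(-2j * (2 * np.pi * n_film * d_nm / wavelengths_nm))
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

wavelengths = np.linspace(400, 800, 201)
d_measured = 250.0                                   # nm, e.g. from the XRF analyzer
true_n = 2.05                                        # "unknown" index for this test
measured = reflectance(true_n, d_measured, wavelengths)

result = minimize_scalar(
    lambda n: np.sum((reflectance(n, d_measured, wavelengths) - measured) ** 2),
    bounds=(1.3, 2.6), method="bounded")
print("recovered refractive index:", round(result.x, 3))
```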
Keywords: interference coating, numerical modeling, reflection coefficient spectrum
The article addresses a significant limitation of the classic TF-IDF method in the context of topic modeling for specialized text corpora. While effective at creating structurally distinct document clusters, TF-IDF often fails to produce semantically coherent topics. This shortcoming stems from its reliance on the bag-of-words model, which ignores semantic relationships between terms, leading to orthogonal and sparse vector representations. This issue is particularly acute in narrow domains where synonyms and semantically related terms are prevalent. To overcome this, the authors propose a novel approach: a contextual-diffusion method for enriching the TF-IDF matrix.
The core of the proposed method involves an iterative procedure of contextual smoothing based on a directed graph of semantic proximity, built using an asymmetric measure of term co-occurrence. This process effectively redistributes term weights not only to the words present in a document but also to their semantic neighbors, thereby capturing the contextual "halo" of concepts.
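A schematic sketch of the contextual-smoothing idea (the article builds a directed graph from an asymmetric co-occurrence measure; here a simple row-normalized co-occurrence matrix stands in for it, and the corpus and parameters are invented):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = ["reactor core cooling system", "core cooling pump failure",
        "turbine hall maintenance", "reactor safety system inspection"]

counts = CountVectorizer().fit_transform(docs)            # (docs, terms)
tfidf = TfidfTransformer().fit_transform(counts).toarray()

cooc = (counts.T @ counts).toarray().astype(float)        # term-term co-occurrence
np.fill_diagonal(cooc, 0.0)
row_sums = cooc.sum(axis=1, keepdims=True)
P = np.divide(cooc, row_sums, out=np.zeros_like(cooc), where=row_sums > 0)

alpha, iterations = 0.3, 3                                 # smoothing strength / depth
X = tfidf.copy()
for _ in range(iterations):
    X = (1 - alpha) * X + alpha * (X @ P)                  # spread weight to neighbors

print("non-zero weights before:", np.count_nonzero(tfidf),
      "after diffusion:", np.count_nonzero(X))
```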
The method was tested on a corpus of news texts from the highly specialized field of atomic energy. A comparative analysis was conducted using a set of clustering and semantic metrics, such as the silhouette coefficient and topic coherence. The results demonstrate that the new approach, while slightly reducing traditional metrics of structural clarity, drastically enhances the thematic coherence and diversity of the extracted topics. This enables a shift from mere statistical clustering towards the identification of semantically integral and interpretable themes, which is crucial for tasks involving the monitoring and analysis of large textual data in specialized domains.
Keywords: topic modeling, latent Dirichlet allocation, TF-IDF, contextual diffusion, semantic proximity, co-occurrence, text vectorization, bag-of-words model, topic coherence, natural language processing, silhouette coefficient, text data analysis
The article provides an overview of the current state of password authentication and highlights its main problems. Various passwordless authentication options are considered as replacements for password authentication. Each option is analyzed in terms of its disadvantages and its ability to replace passwords.
The analysis revealed that some alternatives can only act as an additional factor in multi-factor authentication, such as OTP and push notifications. Others, on the contrary, should not be used as an authentication method at all; these include QR codes.
As a result of the analysis, two directions of passwordless authentication emerged as clear favorites: biometrics and passkeys. When comparing the finalists, passkeys were chosen, since they are free of the main and critical drawback of biometric authentication: its dependence on keeping the original biometric data secret. If biometric data are compromised, the consequences for a person are severe, since the data cannot be changed without surgical intervention.
Passkeys, by contrast, provide a level of protection comparable to biometrics but are free of this drawback. At the same time, passkeys, or more precisely the current FIDO2 standard, have a few shortcomings that hinder their adoption. These include the potential for malware to act as a client. Another, no less important, problem is unlinking an old key and linking a new one if the first is lost or fails.
To address these problems, a secure authentication protocol based on passkey technology needs to be developed.
Keywords: password authentication, passwordless authentication, push notification, QR-code, biometric authentication, passkey, FIDO2, WebAuthn, CTAP2.1
The paper proposes a method to counteract unauthorized privilege escalation in the Android operating system. The proposed method uses the ARM architecture's hardware virtualization technology to control access to the operating system kernel data structures that store task identification information.
Keywords: information security, privilege escalation, Android, hypervisor
The article describes an experiment on designing a neural network for a programmable logic controller in order to eliminate the need to involve programmers in the development of automated control systems. The main task of programmable logic controllers is to simplify the automation of technological processes; they practically eliminate printed circuit board development and component soldering. Obviously, the fewer and simpler the tasks that have to be solved, the faster a new system can be developed and launched, and the lower its cost will be. For the same purpose, controllers are programmed in fairly simple, visual languages, which greatly simplifies the programmers' work. At the current level of microelectronics development, the computing resources of controllers significantly exceed what most automation tasks require. This naturally raises the question of whether the excess computing power can be used to develop a single universal program capable of adapting to any technological process. Such a program would certainly run slower and occupy more memory, but the programming task would then degenerate into configuring ready-made software. The article is devoted to the development of a prototype of such a program based on the single-layer perceptron model. The structure and parameters of the developed neural network are described with the characteristics of the target platform taken into account. The training process of the designed neural network is analyzed. The limitations imposed on the development are listed and substantiated. The advantages and disadvantages, as well as development options, are outlined.
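A minimal numpy sketch of a single-layer perceptron of the kind discussed above (illustrative only; the article targets a PLC runtime, which is not shown here). The perceptron is trained with the classic perceptron rule to reproduce a simple two-input interlock: the output is ON when sensor A is active and sensor B is not.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # sensor inputs
y = np.array([0, 0, 1, 0], dtype=float)                       # desired actuator state

rng = np.random.default_rng(42)
w = rng.normal(0, 0.1, size=2)                                 # weights
b = 0.0                                                        # bias
lr = 0.1                                                       # learning rate

for epoch in range(50):                                        # classic perceptron rule
    for xi, target in zip(X, y):
        out = 1.0 if xi @ w + b > 0 else 0.0
        w += lr * (target - out) * xi
        b += lr * (target - out)

print([1.0 if xi @ w + b > 0 else 0.0 for xi in X])            # learned truth table
```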
Keywords: programmable logic controller, artificial neural network, single-layer perceptron, relay logic language, automated process control system
The paper is devoted to the search for an effective decoding method for a new class of binary erasure-correcting codes. The codes in question are defined by an encoding matrix with restrictions on column weights (MRSt codes). To work with the constructed codes, a decoder based on information aggregates and a decoder based on belief propagation, both adapted to the erasure case, are used. Experiments have been carried out to determine the decoding speed and correcting ability of these methods for the named classes of error-correcting codes. In the case of MRSt codes, the belief propagation decoder is significantly faster than the information-aggregate decoder but slightly inferior in correcting ability.
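A compact sketch of the belief-propagation (peeling) idea for erasures: any parity check involving exactly one erased symbol determines that symbol as the XOR of the known ones. The parity-check matrix below is an arbitrary illustration, not an MRSt code from the article.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],        # parity-check matrix over GF(2)
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
codeword = np.array([1, 0, 1, 1, 1, 0])  # satisfies H @ c = 0 (mod 2)
received = codeword.astype(float)
received[[1, 5]] = np.nan                # two erased positions

def peel(H, r):
    r = r.copy()
    progress = True
    while progress and np.isnan(r).any():
        progress = False
        for row in H:
            erased = [j for j in np.flatnonzero(row) if np.isnan(r[j])]
            if len(erased) == 1:         # exactly one unknown in this check
                j = erased[0]
                known = [k for k in np.flatnonzero(row) if k != j]
                r[j] = int(sum(r[k] for k in known)) % 2
                progress = True
    return r

print(peel(H, received))                 # erased symbols recovered
```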
Keywords: channels with erasure, distributed fault-tolerant data storage systems, code with equal-weight columns, decoder based on information aggregates, decoder based on the belief propagation, RSt code, MRSt code
The article formulates the task of hierarchical text classification, describes approaches to hierarchical classification and the metrics used to evaluate them, examines the local approach to hierarchical classification in detail, describes different variants of local hierarchical classification, reports a series of experiments on training local hierarchical classifiers with various vectorization methods, and compares the evaluation results of the trained classifiers.
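A small sketch of the hierarchical precision, recall, and F-measure commonly used for such evaluations: predicted and true labels are expanded with all of their ancestors in the class hierarchy before computing the overlap. The toy hierarchy below is an assumption for illustration.

```python
parents = {"sports": None, "football": "sports", "hockey": "sports",
           "politics": None, "elections": "politics"}

def with_ancestors(labels):
    expanded = set()
    for label in labels:
        while label is not None:
            expanded.add(label)
            label = parents[label]
    return expanded

def hierarchical_f1(true_labels, predicted_labels):
    t, p = with_ancestors(true_labels), with_ancestors(predicted_labels)
    overlap = len(t & p)
    precision = overlap / len(p) if p else 0.0
    recall = overlap / len(t) if t else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# "football" predicted as "hockey": both share the ancestor "sports",
# so the error is penalized less than a cross-branch mistake.
print(hierarchical_f1({"football"}, {"hockey"}))      # 0.5
print(hierarchical_f1({"football"}, {"elections"}))   # 0.0
```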
Keywords: classification, hierarchical classification, local classification, hierarchical precision, hierarchical recall, hierarchical F-measure, natural language processing, vectorization
Modern approaches to synthetic speech recognition are in most cases based on the analysis of specific acoustic, spectral, or linguistic patterns left behind by speech synthesis algorithms. An analysis of open sources has shown that the further development of methods and algorithms for synthetic speech recognition is crucial for providing protection against emerging threats and maintaining trust in existing biometric systems.
This paper proposes an algorithm for synthetic speech detection based on the calculation of audio signal entropy. The relevance of the work is driven by the increasing number of cases involving the malicious use of synthetic speech, which is becoming almost indistinguishable from genuine human speech. The results demonstrated that the entropy of synthetic speech is significantly higher, and the algorithm is robust to data losses. The advantages of the algorithm are its interpretability and low computational complexity. Experiments were conducted on the CMU ARCTIC dataset using the XTTS v.2 model. The proposed algorithm enables making a decision on the presence of synthetic speech without the need for complex spectral analysis or machine learning methods.
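A minimal sketch of the entropy feature such a detector can build on (the framing, histogram size, and threshold here are assumptions, not the article's parameters): the amplitude distribution of short frames is histogrammed and its average Shannon entropy is compared against a decision threshold.

```python
import numpy as np

def frame_entropy(signal, frame=1024, bins=64):
    entropies = []
    for start in range(0, len(signal) - frame + 1, frame):
        hist, _ = np.histogram(signal[start:start + frame], bins=bins)
        p = hist[hist > 0] / frame
        entropies.append(-np.sum(p * np.log2(p)))
    return float(np.mean(entropies))

rng = np.random.default_rng(0)
waveform = rng.normal(0.0, 0.1, 16000)       # stand-in for a 1 s, 16 kHz recording

threshold = 5.0                              # illustrative value; the article derives
h = frame_entropy(waveform)                  # its own statistics on CMU ARCTIC data
print(round(h, 2), "-> synthetic" if h > threshold else "-> genuine")
```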
Keywords: synthetic speech, spoofing, Shannon entropy, speech recognition
The work is devoted to the application of a linear Kalman filter (KF) for estimating the roll angle of a quadcopter with structural asymmetry, under which the control input contains a nonzero constant component. This violates the standard assumption of zero mathematical expectation and reduces the efficiency of traditional KF implementations. A filter synthesis method is proposed based on the optimization of the covariance matrices ratio using a criterion that accounts for both the mean square error and the transient response time. The effectiveness of the approach is confirmed by simulation and experimental studies conducted on a setup with an IMU-6050 and an Arduino Nano. The obtained results demonstrated that the proposed Kalman filter provides improved accuracy in estimating the angle and angular velocity, thereby simplifying its tuning for asymmetric dynamic systems.
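A minimal sketch of a linear Kalman filter for roll-angle estimation from an MPU-6050-class IMU. The state choice [roll angle, gyro bias] and the covariance values below are assumptions for illustration; the article instead tunes the ratio of the covariance matrices with a combined accuracy and transient-time criterion.

```python
import numpy as np

dt = 0.01                                    # 100 Hz update rate
F = np.array([[1.0, -dt], [0.0, 1.0]])       # angle integrates (gyro rate - bias)
B = np.array([[dt], [0.0]])
H = np.array([[1.0, 0.0]])                   # accelerometer measures the angle
Q = np.diag([1e-4, 1e-6])                    # process noise (illustrative)
R = np.array([[0.03]])                       # accelerometer noise (illustrative)

x = np.zeros((2, 1))                         # [roll angle, gyro bias]
P = np.eye(2)

def kalman_step(x, P, gyro_rate, accel_angle):
    # predict
    x = F @ x + B * gyro_rate
    P = F @ P @ F.T + Q
    # update with the accelerometer-derived roll angle
    y = np.array([[accel_angle]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for _ in range(200):                          # illustrative constant-input run
    x, P = kalman_step(x, P, gyro_rate=0.02, accel_angle=0.1)
print("estimated roll:", float(x[0, 0]), "estimated bias:", float(x[1, 0]))
```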
Keywords: Kalman filter, quadcopter with asymmetry, optimization of covariance matrices, functional with mean square error and process time, complementary filter, roll and pitch control
An algorithm has been developed and a program has been written in the Python programming language for calculating the numerical values of the optimal delayed filtering operator for an L-Markov process with a quasi-rational spectral density, which generalizes the Markov process with a rational spectrum. The construction of the optimal delayed filtering operator is based on the spectral theory of random processes. The computational formula for the filtering operator was obtained using the theory of L-Markov processes, methods for calculating stochastic integrals, the theory of functions of a complex variable, and trigonometric regression methods. An example of an L-Markov process (signal) with a quasi-rational spectrum, interesting from the standpoint of controlling complex stochastic systems, is considered. A trigonometric model was taken as the basis for constructing the mathematical model of the optimal delayed filtering operator. It is shown that the values of the delayed filtering operator are represented as a linear combination of the values of the received signal at certain moments of time and the values of sine and cosine functions at the same moments. It is established that the numerical values of the filtering operator depend substantially on the parameter β of the joint spectral density of the received and transmitted signals; for this reason, three different problems of signal propagation through different physical media were considered. It is found that the absolute value of the real part of the filtering operator exceeds the absolute value of the imaginary part by a factor of two or more on average over all three intervals of variation of the delay and in all three media. Plots of the real and imaginary parts of the filtering operator versus the delay τ were constructed, as well as three-dimensional plots of the delayed filtering operator itself versus the delay. A physical interpretation of the obtained results is given.
Keywords: random process, L-Markov process, noise, delayed filtering, spectral characteristic, filtering operator, trigonometric trend, standardized approximation error
A mathematical model has been constructed, an algorithm has been developed, and a program has been written in the Python programming language for calculating the numerical values of the optimal prediction filtering operator for an L-Markov process with a quasi-rational spectrum. The probabilistic model of the filtering operator formula was obtained from the spectral analysis of L-Markov processes using methods for calculating stochastic integrals, the theory of analytic functions of a complex variable, and correlation and regression analysis methods. An example of an L-Markov process is considered for which the values of the optimal prediction filtering operator could be expressed as a linear combination of the values of the process at certain moments of time and the sum of the numerical values of cosines and sines at the same moments. The numerical values of the filtering operator were obtained from a trigonometric regression model with 16 harmonics, which best approximates the process under study and has the minimum approximation error.
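A minimal sketch of the trigonometric-regression step mentioned above (the signal below is synthetic and the period is invented; the article fits 16 harmonics to a realization of the L-Markov process): the harmonic coefficients are obtained by ordinary least squares on a sine/cosine design matrix.

```python
import numpy as np

def trig_design(t, period, harmonics):
    cols = [np.ones_like(t)]
    for k in range(1, harmonics + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.column_stack(cols)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 400)
signal = (1.5 * np.sin(2 * np.pi * t / 10)
          + 0.4 * np.cos(6 * np.pi * t / 10)
          + rng.normal(0, 0.2, t.size))

X = trig_design(t, period=10.0, harmonics=16)
coeffs, *_ = np.linalg.lstsq(X, signal, rcond=None)   # harmonic coefficients
fitted = X @ coeffs
print("residual std:", round(float(np.std(signal - fitted)), 3))
```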
Keywords: random process, L-Markov process, prediction filtering, spectral characteristics, filtering operator
The article describes the process of developing a volumetric display for information and communication interaction in the Arctic, where traditional means of visualization and communication face the challenges of extreme climate, isolation, and limited infrastructure. An analysis of the main areas of application of volumetric displays in the Arctic zone is carried out. The main disadvantages of the methods for creating a volumetric image in existing 3D displays are considered. Taking into account the main task to be solved, namely creating the illusion of a three-dimensional object for a group of more than two people over a wide viewing angle, the article describes and analyzes the two main developed configurations of the optical system; the latter meets the requirements, ensuring stable operation in Arctic conditions and opening up prospects for deployment in remote and hard-to-reach regions of the Far North.
Keywords: volumetric display, Arctic zone, 3D image, system analysis, lens, optical system, computer modeling
The paper considers the synthesis of a non-stationary automatic control system for braking the wheels of a heavy vehicle using the generalized Galerkin method. The method is applied to the problem of synthesizing a non-stationary system whose desired program motion is specified at the output of a nonlinear element. The paper presents the results of studying how the non-stationarity of the parameters of the fixed part of the system (the plant) degrades the quality of the transient process. For critical operating conditions, the controller parameters were recalculated, and the results of accounting for the non-stationarity and re-synthesizing the system were evaluated.
Keywords: automatic control system, regulator, braking system, unsteadiness of parameters, generalized Galerkin method
The paper deals with the problem of assessing the level of security of critical information infrastructure objects in the financial sector based on organizational structure and management factors in the context of internal audit. Existing standards do not allow flexible assessment of the indicators characterizing information security requirements and propose obtaining expert assessments based on subjectively selected elements (documents, facts) related to particular requirements. The article considers a Bayesian approach to assessing the values of private indicators for all available characteristics of information security requirements, which allows obtaining them on a continuous scale. A corresponding model is presented that includes the calculation of private and generalized indicator values. It improves the standards-defined approach to assessing the level of security of critical information infrastructure objects during internal audit by evaluating private indicator values on a continuous scale and by taking into account the history of changes in the characteristics of information security requirements.
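A schematic sketch of the Bayesian idea described above (the article's actual model, priors, and weighting scheme are not reproduced here): evidence items related to one information security requirement are treated as Bernoulli observations, and the posterior mean of a Beta prior gives a private indicator on a continuous scale, with older audit history down-weighted.

```python
def private_indicator(evidence_history, prior_a=1.0, prior_b=1.0, decay=0.8):
    """evidence_history: list of audit rounds, each a list of 0/1 findings
    (1 = evidence that the requirement is met), oldest round first."""
    a, b = prior_a, prior_b
    for age, findings in enumerate(reversed(evidence_history)):
        weight = decay ** age                 # older rounds contribute less
        a += weight * sum(findings)
        b += weight * (len(findings) - sum(findings))
    return a / (a + b)                        # posterior mean on a [0, 1] scale

history = [[1, 1, 0], [1, 1, 1, 0], [1, 1, 1, 1]]   # three audit rounds, newest last
print(round(private_indicator(history), 3))

# A generalized indicator can then aggregate the private ones, for example as a
# weighted mean over all requirements of a given security measure.
```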
Keywords: information security, Bayesian approach, critical information infrastructure objects, indicators of compliance with information security requirements, level of protection of objects, model with probabilistic components
This study analyzes the performance of Reed-Solomon codes (RS codes) using the MATLAB software environment. RS codes are selected as a class of error-correcting codes characterized by high performance under multiple burst errors, which makes them widely applicable in areas such as digital television, data storage (CD/DVD, flash memory) and wireless communication. The paper demonstrates and evaluates the performance of RS codes in practice through their simulation in MATLAB. The study covers the creation of simulation models for encoding, error insertion and decoding data using RS algorithms in MATLAB. The performance of the codes is evaluated by calculating the bit error rate (BER) and other relevant metrics. The influence of key parameters of RS codes (e.g., codeword length, number of check symbols) on their error-correcting ability is analyzed. The results of the study are intended to clearly show how RS codes cope with different types of errors and how their performance can be optimized by tuning the parameters. The work highlights the importance of MATLAB as a tool for developing, testing and optimizing coding systems, providing practical tools for researchers and engineers.
Keywords: Reed-Solomon codes, MATLAB, error correction, simulation, performance, error probability, communication systems, data storage