The paper provides an overview of research on integrating evolutionary game theory (EGT) with multi-agent reinforcement learning (MARL). The main problems of MARL and the corresponding advantages of EGT are analyzed. The analysis shows that incorporating EGT can effectively address the MARL problems of non-stationarity, credit assignment, and partial observability, providing stable strategic convergence and a new path for group optimization. It is shown that the integration of EGT and MARL forms a promising theoretical and technical basis for a breakthrough in multi-agent control. At the same time, deeply merging the two directions will require optimizing the integration mechanisms, developing more reliable algorithms, and strengthening applied research in complex heterogeneous systems.
Keywords: evolutionary game theory, multi-agent reinforcement learning, multi-agent control, non-stationarity, credit assignment, partial observability
In recent years, the safe operation of energy facilities has increasingly been ensured by probabilistic non-destructive testing systems. This article examines a method for predicting and estimating the number of missed defects by solving an inverse problem. A detailed analysis of indirect manifestations and prediction of an indirect parameter is conducted using the Keras deep learning library, which determines the quantitative characteristics of the facility under study. The results of the study demonstrate encouraging prediction accuracy with easily correctable signs of model overfitting.
Keywords: non-destructive testing, defects, defect detection probability distribution curves, synthetic data for deep learning, regression forecasting, Keras, structural and semantic features, non-linear dependencies
The article describes an experiment covering the compilation of a training sample and the training and testing of a neural network model for a computer vision system that detects burnout of a tundish nozzle at a continuous steel casting plant. The validity of augmenting the training data is considered, and the obtained results are analyzed.
Keywords: computer vision, object detection, dataset, augmentation, steelmaking, continuous steel casting, burnout of a tundish nozzle
This article addresses the challenge of building Android applications within secure, network-isolated environments where no direct internet connection is available. The primary objective is to develop a reliable method for the continuous integration and delivery (CI/CD) of Android artifacts under these constraints. The proposed solution methodologically integrates Docker containerization to ensure a standardized build environment with the Nexus Repository Manager for creating a comprehensive local mirror of all external dependencies, such as those from Google Maven. This local repository cache is then made accessible inside the isolated network via a configured nginx proxy server. The implemented system successfully enables a complete and automated Android build pipeline, entirely eliminating the need for external access during compilation. The results demonstrate significant enhancements in security by mitigating risks associated with public repositories, while also ensuring build stability, reproducibility, and protection against upstream outages. In conclusion, this approach provides a practical and robust framework for secure mobile application development in high-security or restricted corporate network infrastructures.
Keywords: docker, containerization, android, flutter, ci/cd, nginx, proxying, network isolation, application building.
The design of automated control systems for the physical protection of facilities is one of the most sought-after areas in the development of domestic software products. The article presents the architecture of a hardware-software system, an assessment of the development tools required to implement a web application based on the Astra Linux operating system, and a description of an experiment to create a system prototype. The following tools were used to build the system: the Angular framework for the client layer; the FastAPI framework, the SQLAlchemy library, and the WebSocket protocol for the server layer; and the PostgreSQL object-relational database management system for data storage. The result of the work is a control system for technical security means that demonstrates interaction with devices and the database. The implemented prototype will serve as a basis for developing a hardware-software complex for the physical protection of a facility.
Keywords: domestic operating system, web application, development tools, management system, database, sensor, monitoring
The article discusses the problem of feature selection when training machine learning (ML) models in the task of identifying fake (phishing) websites. As a solution, a set of key metrics is proposed: efficiency, reliability, fault tolerance, and retrieval speed. Efficiency measures a feature's impact on prediction accuracy; reliability measures how well a feature distinguishes phishing sites from legitimate ones; the fault-tolerance score measures the empirical probability that a feature is valid and populated; and retrieval speed is the logarithmic extraction time of the feature. This approach allows features to be ranked into categories and then selected for training machine learning models, depending on the specific domain and constraints. In this article, 82 features were measured and 6 fully connected neural networks were trained to evaluate the effectiveness of the metrics. Experiments have shown that the proposed approach can increase model accuracy by 1-3% and precision by 0.03, and can significantly reduce overall extraction time, thereby improving response rate.
Keywords: feature evaluation method, machine learning model, identification of phishing websites, metric, efficiency, reliability, fault tolerance, and retrieval speed
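The ranking step described above can be sketched as follows. The four metric names come from the abstract, but the normalization to [0, 1], the weights, and the example feature names are hypothetical, since the article's exact combination rule is not given here:

```python
# Illustrative ranking of candidate features by the four metrics named in the
# abstract (efficiency, reliability, fault tolerance, retrieval speed).
# The weighted-sum combination and the weights themselves are assumptions.

def rank_features(scores, weights=(0.4, 0.3, 0.2, 0.1)):
    """scores: {name: (efficiency, reliability, fault_tolerance, speed)},
    each component normalized to [0, 1], higher = better.
    Returns feature names sorted by weighted composite score, best first."""
    def composite(metrics):
        return sum(w * m for w, m in zip(weights, metrics))
    return sorted(scores, key=lambda name: composite(scores[name]), reverse=True)

# Hypothetical per-feature scores for three phishing-detection features.
features = {
    "url_length":    (0.7, 0.6, 0.9, 0.95),
    "whois_age":     (0.9, 0.8, 0.5, 0.20),
    "has_ip_in_url": (0.5, 0.9, 0.9, 0.99),
}
print(rank_features(features))
```

A domain-specific selection would then keep the top-ranked features, trading accuracy against extraction time by adjusting the weights.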
This article proposes a hybrid method for speckle noise reduction in radar images based on a combination of the wavelet transform and the U-Net neural network (NN) architecture with enhancement of low-frequency components in high-frequency subbands. The wavelet transform decomposes the radar images into frequency subbands, allowing noise to be localized primarily in high-frequency components. These components are processed using a U-Net neural network, whose effectiveness stems from its symmetric structure and skip connections, which allow for the accurate preservation and restoration of important image details. Furthermore, enhancing the low-frequency component in high-frequency subbands to improve the signal-to-noise ratio allows the neural network to more accurately separate useful signal structures from the noise. The combined approach demonstrates high speckle noise reduction efficiency with minimal loss of structural information, outperforming traditional methods in terms of restoration quality and image clarity.
Keywords: speckle noise, noise reduction, wavelet transform, neural networks, U-Net, frequency subbands
The article presents a hybrid neural network for estimating the mass of a car and the longitudinal/transverse slopes of a road, combining a square-root sigma-point Kalman filter with a neural network model based on a transformer encoder that applies cross-attention to the estimation residuals. The proposed approach combines the physical interpretability of the filter with the high approximation capability of the neural network. To enable implementation on embedded electronic control units, the model was simplified by distilling its knowledge into a compact long short-term memory (LSTM) network. Experiments in various scenarios showed a reduction in the average error of more than 25% with a computational delay of less than 0.3 ms.
Keywords: vehicle condition estimation, road slope estimation, vehicle mass estimation, transformer neural network, cross-attention, adaptive filtering, knowledge distillation, square-root sigma-point Kalman filter, intelligent vehicles, sensor fusion
The article is devoted to the method of formalizing indicators of compromise (IoC) using a Bayesian approach to classify and rank them based on probabilistic inference. The problem of detecting malicious indicators from a large volume of data found in various sources of threat information is critically important for assessing modern cybersecurity systems. Traditional heuristic approaches, based on simple aggregation or expert evaluation of IoCs, do not provide sufficient formalization and further ranking of their reliability regarding their association with a particular malicious campaign due to the incompleteness and uncertainty of the information received from various sources.
Keywords: indicators of compromise (IoC), Bayesian inference, cyber threats, probabilistic models, malicious activity analysis, threat intelligence, IoC classification, multi-source analysis
The integration of artificial intelligence into mobile devices is fraught with serious challenges, especially due to limited available resources and real-time data processing requirements. The article discusses modern approaches to reducing computational costs in artificial intelligence systems for mobile objects, including model optimization and computation-allocation strategies for resource-constrained mobile platforms.
Keywords: artificial intelligence, moving objects, lightweight models, edge models, hardware acceleration, knowledge distillation, quantization
The article discusses modern approaches to forecasting and detecting forest fires using machine learning technologies and remote sensing data. Special attention is paid to the use of computer vision algorithms, such as convolutional neural networks and transformers, to detect and segment fires in images from unmanned aerial vehicles. The high efficiency of hybrid architectures and lightweight models for real-time operation is noted.
Keywords: forest fires, forecasting, unmanned aerial vehicles, deep learning, convolutional neural networks, transformers, image segmentation
A method of applying mathematical analysis and machine learning to organize predictive maintenance of an electric motor is considered using the example of the AIMU 112 MV6 U1 electric motor. A comprehensive technique for diagnosing the technical condition of an electric motor based on the analysis of vibration signals recorded by a three-axis accelerometer is proposed, which can be adapted to monitor the condition of various types of rotating equipment in industrial conditions.
Keywords: predictive maintenance, electric motor, vibration analysis, machine learning, neural networks, fault diagnosis, accelerometer, condition classification
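The vibration-signal analysis underlying such diagnostics can be illustrated with a minimal feature-extraction sketch: RMS level and the dominant spectral line of one accelerometer axis, two quantities commonly fed to a fault classifier. The sampling rate and test signal are synthetic; the article's actual feature set is not reproduced here:

```python
import numpy as np

def vibration_features(signal, fs):
    """Return (rms, dominant_frequency_hz) for a 1-D vibration record."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    # Remove the DC offset before looking for the dominant spectral line.
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dominant = float(freqs[np.argmax(spectrum)])
    return rms, dominant

fs = 1000.0                            # Hz, synthetic sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
x = 0.5 * np.sin(2 * np.pi * 50 * t)   # 50 Hz line, e.g. a shaft-speed harmonic
rms, f0 = vibration_features(x, fs)
print(round(rms, 3), f0)
```

In a condition-monitoring pipeline these per-axis features (and richer ones) would form the input vector for the classifier of the motor's technical state.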
The article presents an analysis of the current state of the monitoring process in spacecraft (SC) flight control. It outlines the state of monitoring technologies currently used in the flight control of modern spacecraft. The shortcomings of the monitoring process, which are exacerbated by the development of space technology, are identified. To address these shortcomings, the use of new intelligent methods is proposed, which, by increasing the automation of the spacecraft flight monitoring process, will enhance the reliability and efficiency of control. Promising methods for improving the reliability of spacecraft monitoring using artificial intelligence technologies, in particular artificial neural networks (ANN), are considered.
An analysis of scientific publications on the application of ANNs in space technology was conducted; examples of ANN application in flight control, diagnostics, and data processing tasks are provided. The advantages and limitations of using neural networks in space technology are examined.
Keywords: spacecraft, flight control, monitoring, state analysis, flight management, telemetry data
The article presents a systematic study of information flows in the "application-DBMS" link and proposes a comprehensive model of protection against SQL injections based on multi-level analysis. The system analysis considers the full cycle of query processing, which allows overcoming the fragmentation of existing approaches. The limitations of existing methods based on signature analysis, machine learning, and syntax validation are analyzed. To improve the reliability and accuracy of detection, a new combined method is proposed that integrates static syntax analysis of abstract syntax trees (AST) of queries with dynamic behavioral analysis of sessions. A key feature of the syntax module is the application of the Jaccard coefficient to assess the structural similarity of paths in the AST, which ensures the efficient detection of polymorphic injections. The behavioral module analyzes the temporal and statistical patterns of the query sequence, which allows the detection of anomalous sessions, including time-based attacks.
Keywords: SQL injections, system analysis, machine learning, parsing, abstract syntax tree, behavioral analysis, Jaccard coefficient, polymorphic attacks, time-based attacks
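The Jaccard-based structural comparison of AST paths can be sketched as follows. Building a real SQL AST is out of scope here, so the root-to-leaf path sets are hand-written hypothetical stand-ins for what a SQL parser would produce:

```python
# Jaccard coefficient J(A, B) = |A ∩ B| / |A ∪ B| over sets of root-to-leaf
# AST paths: a low similarity to known-legitimate query shapes flags a
# structurally different (possibly injected) query.

def jaccard(paths_a, paths_b):
    a, b = set(paths_a), set(paths_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical path sets: a legitimate query vs. a polymorphic injection that
# adds an OR-tautology branch to the WHERE clause.
legit = {"SELECT/COLS/id", "SELECT/FROM/users", "SELECT/WHERE/EQ"}
inject = {"SELECT/COLS/id", "SELECT/FROM/users", "SELECT/WHERE/OR",
          "SELECT/WHERE/EQ_TAUTOLOGY"}
print(round(jaccard(legit, inject), 3))
```

Because the comparison is over tree structure rather than literal text, cosmetic rewrites of an injection (whitespace, comments, synonymous encodings) do not raise its similarity to legitimate queries.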
This article describes a method for filtering speckle noise from synthetic aperture radar images using wavelet transforms and nonlocal means. The proposed method utilizes a spatial-frequency representation of the image and analyzes the similarity of local wavelet coefficient structures in subbands. Experimental results demonstrate that the developed method outperforms some known methods in terms of metrics such as mean square error, peak signal-to-noise ratio, and structural similarity index, as well as subjective visual assessment. The method effectively filters speckle while preserving fine details, contrasting edges, and correctly restoring background brightness without introducing noticeable artifacts.
Keywords: radar image, synthetic aperture radar, speckle noise, image filtering, wavelet transform, thresholding, non-local means
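The wavelet stage of such subband filtering can be sketched with a one-level 2-D Haar decomposition and soft-thresholding of the detail subbands, in pure NumPy. This is a minimal sketch only: the non-local means weighting of wavelet coefficients that the method actually uses is omitted, and the test image and speckle model are synthetic:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform (averaging form): LL, LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / 2      # rows: average
    d = (img[0::2, :] - img[1::2, :]) / 2      # rows: difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, t=0.1):
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, soft(lh, t), soft(hl, t), soft(hh, t))

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 32), np.ones(32))   # smooth test image
noisy = clean * rng.gamma(4.0, 0.25, clean.shape)      # multiplicative speckle
print(np.mean((denoise(noisy) - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The point of the decomposition is visible even in this toy: speckle energy lands mostly in the LH/HL/HH subbands, where shrinkage removes it while the smooth content survives in LL.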
This work presents the Multi-Agent Coverage Controller (MACC)—a specialized deep reinforcement learning method designed to solve the coverage path planning problem in multi-agent systems. The method addresses key challenges inherent to coverage path planning, including sparse and noisy rewards, high gradient variance, the difficulty of credit assignment among agents, and the need to scale to a variable number of agents. MACC integrates a specific set of mechanisms: an adaptive clipping-interval width, advantage-modulation gating, a counterfactual baseline for the centralized critic, and a multi-head self-attention mechanism with a presence mask. Theoretical properties of the method are provided, demonstrating optimization stability and reduced variance of gradient estimates. A comprehensive ablation study is conducted, showing the contribution of each mechanism to agent coordination, spatial distribution of trajectories, and overall coverage speed. Experiments on a set of satellite maps indicate that MACC achieves substantial improvements in coverage completeness and speed compared to the baseline configuration, delivering the best results when all integrated mechanisms are used jointly.
Keywords: multi-agent system, coverage path planning, deep reinforcement learning, adaptive clipping-interval width, advantage-modulation gating, counterfactual baseline, multi-head self-attention mechanism, agent coordination
The paper presents a methodology for addressing the scarcity of labeled industrial data for training deep neural networks for semantic segmentation. A platform is proposed for synthetic generation of training point cloud datasets based on a minimal number of real laser-scanning samples of mechanical, electrical, and plumbing networks. The algorithm includes detecting the axes of cylindrical elements using the Random Sample Consensus method, constructing perpendicular joint planes, and applying affine transformations to create assemblies of 2–7 elements. The training set is increased from 8 real scans to more than 800 synthetic examples, which makes it possible to improve the segmentation accuracy of the PointNet++ deep hierarchical point cloud learning architecture from 72% to 89% in terms of the Intersection over Union (IoU) metric. The developed system enables automated creation of BIM models of engineering infrastructure with 90–95% accuracy with respect to design parameters.
Keywords: synthetic data generation, point clouds, semantic segmentation, laser scanning, Random Sample Consensus method, shortage of labeled data, BIM modeling, engineering networks, deep learning
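The axis-detection step can be illustrated with a minimal RANSAC sketch: fit a 3-D line (a cylinder-axis candidate) to a noisy point set by repeatedly sampling two points and counting inliers within a distance threshold. Cylinder-radius estimation and the joint-plane construction from the methodology are not reproduced; the point data are synthetic:

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.05, rng=None):
    """Random Sample Consensus for a 3-D line: returns ((point, direction),
    inlier_mask) for the model with the most inliers."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best = None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Distance from every point to the line through p with direction d.
        v = points - p
        dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (p, d)
    return best, best_inliers

rng = np.random.default_rng(1)
t = rng.uniform(0, 1, 100)
axis_pts = np.c_[t, 2 * t, 3 * t] + rng.normal(0, 0.01, (100, 3))  # on a line
outliers = rng.uniform(-1, 1, (30, 3))                             # clutter
(p, d), inliers = ransac_line(np.vstack([axis_pts, outliers]))
print(int(inliers[:100].sum()))  # most of the 100 axis points become inliers
```

In the full pipeline, each detected axis would seed a cylinder fit, after which perpendicular joint planes and affine transformations assemble the synthetic training elements.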
An algorithm for modeling smooth hysteresis nonlinearities is proposed, taking into account the slope coefficient k and the saturation level c. The developed model provides accuracy and ease of adjustment while maintaining intuitive physical parameters of the hysteresis loop, which makes it effective for practical application in the tasks of analysis and synthesis of nonlinear control systems.
Keywords: unambiguous nonlinearities, hysteresis, automatic control systems, backlash with saturation, ambiguous nonlinearities, algorithm, modeling of automatic control systems, relay, static characteristic, approximation
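As a reference for the smooth model, the classic (non-smooth) backlash-with-saturation characteristic named in the keywords can be written as a play operator followed by a gain k clipped at the saturation level c. The play half-width b is an extra illustrative parameter; the article's smooth approximation itself is not reproduced:

```python
def backlash_saturation(xs, k=2.0, c=1.0, b=0.25, w0=0.0):
    """Quasi-static response of a backlash (play) of half-width b followed by
    slope k with saturation at +/-c. xs is the input sample sequence."""
    w, ys = w0, []
    for x in xs:
        w = min(x + b, max(x - b, w))        # play (backlash) memory update
        ys.append(max(-c, min(c, k * w)))    # slope k, saturation at +/-c
    return ys

# One loading-unloading cycle: ramp the input up to 1.0 and back to 0.0.
up = [i / 10 for i in range(0, 11)]
down = [i / 10 for i in range(10, -1, -1)]
ys = backlash_saturation(up + down)
print(ys[10], ys[-1])  # top of the loop vs. return to x = 0
```

The nonzero output on return to x = 0 is the hysteresis loop itself; a smooth model with the same k and c should reproduce this loop while remaining differentiable for analysis and synthesis.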
The paper proposes a modified method for the parametric synthesis of nonlinear automatic control systems with ambiguous nonlinearities based on the generalized Galerkin method. To eliminate the need to determine switching points, the transient process is approximated by two sections: a polynomial before the time tp and a constant value after it. This allows us to obtain recurrent formulas for the integrals B_qi, simplify calculations, and maintain the absolute stability of the system.
Keywords: parametric synthesis, nonlinear automatic control systems, generalized Galerkin method, ambiguous (multi-valued) nonlinearities, hysteresis, switching points, polynomial approximation, impulse automatic control systems, recurrent relations
The study addresses the problem of reducing the probability of emergency situations involving unmanned aerial systems. Accidents are regarded as outcomes of combinations of events that are individually of relatively low hazard. Causal relationships are represented by fault trees, in which the root node corresponds to an accident, the leaves correspond to basic events, and intermediate nodes describe their logical combinations. Accident scenarios are associated with the minimal cut sets of the fault tree. To identify accident prevention strategies, the concept of successful-operation paths is employed. Each such path is defined as a set of nodes having a non-empty intersection with all minimal cut sets. It is assumed that preventing all events included in a successful-operation path renders the development of accident scenarios impossible.
The study additionally accounts for the fact that the same flight mission may be executed along routes of differing complexity. Route complexity influences the cost estimates of measures aimed at preventing the events that form accident-related combinations. A model example is provided that includes the assessment of the complexity of two routes, a table of mitigation costs for basic events, and the selection of an appropriate successful-operation path for accident prevention. The proposed methodology is intended to reduce the probability of the realization of accident-inducing event combinations to an acceptable level while adhering to operational constraints and the mission-specific requirements of the flight task.
Keywords: unmanned aerial systems, emergency combination of events, fault tree, route complexity, logical-probabilistic security analysis
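The selection of a successful-operation path described above is a minimum-cost hitting-set problem: choose a set of basic events that intersects every minimal cut set, at the lowest total mitigation cost. The cut sets and per-event costs below are made up for illustration, and brute force is adequate at this toy scale:

```python
from itertools import combinations

def cheapest_hitting_set(cut_sets, costs):
    """Return (events, total_cost) for the cheapest set of basic events
    having a non-empty intersection with every minimal cut set."""
    events = sorted(set().union(*cut_sets))
    best, best_cost = None, float("inf")
    for r in range(1, len(events) + 1):
        for cand in combinations(events, r):
            s = set(cand)
            if all(s & cs for cs in cut_sets):       # hits every cut set
                cost = sum(costs[e] for e in cand)
                if cost < best_cost:
                    best, best_cost = s, cost
    return best, best_cost

cut_sets = [{"A", "B"}, {"B", "C"}, {"C", "D"}]   # minimal cut sets
costs = {"A": 3, "B": 2, "C": 4, "D": 1}          # mitigation cost per event
path, cost = cheapest_hitting_set(cut_sets, costs)
print(sorted(path), cost)
```

Preventing the events of the returned set blocks every accident scenario; route complexity would enter through the cost table, making some mitigations cheaper on one route than another.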
This article analyzes the limitations of the standalone use of Kalman filters in complex dynamic systems and systematizes modern advances in the integration of deep learning methods. Practical aspects of the combined application of deep learning and Kalman filters are explored, demonstrating improved accuracy and reliability of solutions under dynamic conditions, noisy environments, and complex environmental factors. Finally, promising directions for the development of multisensor data fusion methods are outlined.
Keywords: deep learning, integrated navigation, multi-source data fusion, Kalman filter, extended Kalman filter
The article discusses modern methods for protecting bank customers' personal information based on differential anonymization of data using trusted neural networks. It provides an overview of the regulatory framework, analyzes technological approaches and describes a developed multi-level anonymization model that combines cryptographic and machine learning techniques. Special attention is paid to balancing between preserving data utility and minimizing the risk of customer identity disclosure.
Keywords: differential anonymization, trusted neural network, personal data, banking technologies, information security, cybersecurity
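The "differential anonymization" discussed above is closely related to differential privacy, whose simplest instrument is the Laplace mechanism: a numeric answer is released with noise scaled to sensitivity/epsilon. This sketch shows only that privacy/utility trade-off; the cryptographic and neural components of the described multi-level model are not reproduced:

```python
import math
import random

def laplace_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release true_count + Laplace(0, sensitivity / epsilon) noise
    (inverse-CDF sampling of the Laplace distribution)."""
    rng = rng or random.Random(0)
    u = rng.random() - 0.5                   # uniform in [-0.5, 0.5)
    scale = sensitivity / epsilon
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Smaller epsilon -> stronger anonymization -> noisier released statistic.
rng = random.Random(42)
strict = [laplace_count(1000, 0.1, rng=rng) for _ in range(500)]
loose = [laplace_count(1000, 10.0, rng=rng) for _ in range(500)]
spread = lambda xs: sum(abs(x - 1000) for x in xs) / len(xs)
print(spread(strict) > spread(loose))
```

Balancing data utility against disclosure risk then reduces to choosing epsilon per released statistic.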
A reproducible method is presented for the autonomous determination of the coordinates of fourth-generation mobile network base stations and the parameters of their sectors, based solely on field observations from a modem, without using time-delay or angle-of-arrival estimation methods. The approach combines robust selection of an informative "core" of measurements, weighted distance minimization, and aggregation at the site level, providing stable estimates in urban environments. Experimental verification in a real scene demonstrates a significant reduction in localization error compared to the baseline centroid and median methods, which confirms the practical applicability of the proposed solution.
Keywords: LTE, positioning, localization, base station, site coordinate, signal strength, angular distribution, sectorization, optimization, observation, geometric median, field recording, minimization method, radio signal
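The distance-minimization and geometric-median ideas behind such aggregation can be illustrated with the Weiszfeld algorithm, which iterates toward the point minimizing the (weighted) sum of distances to the observations. The observation coordinates below are synthetic, and this sketch is not the article's full pipeline:

```python
import math

def weiszfeld(points, weights=None, iters=100, eps=1e-9):
    """Weighted geometric median of 2-D points via Weiszfeld iteration."""
    weights = weights or [1.0] * len(points)
    x = sum(p[0] for p in points) / len(points)   # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(px - x, py - y)
            if d < eps:               # iterate landed on a data point: stop
                return px, py
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return x, y

# Observations clustered near (10, 20) plus one far outlier: the geometric
# median resists the outlier far better than the centroid would.
obs = [(10.1, 20.0), (9.9, 19.8), (10.0, 20.2), (10.2, 19.9), (60.0, 70.0)]
mx, my = weiszfeld(obs)
print(round(mx, 1), round(my, 1))
```

This robustness to outlying measurements is exactly why median-type estimators outperform the plain centroid in cluttered urban radio environments.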
The work compares recurrent networks and transformer-based models for predicting the completion time of a business process. Both model classes operate on the sequence of process actions and can take a variety of attributes into account when determining the target characteristics. For the comparison, a recurrent long short-term memory (LSTM) model and a custom transformer encoder were used, tested on openly available real data from support-service logs. The models were trained and tested in Python using the pandas, numpy, and torch libraries, with identical data preparation, prefix generation, and temporal train/test splitting for both models. In the experiments, the transformer encoder achieved a lower mean absolute error; in terms of standard deviation, the models showed approximately the same accuracy, with the transformer model slightly ahead.
Keywords: predictive monitoring, event log, machine learning, transformer encoder, neural networks, data preparation, regression model, normalization, padding, recurrent network, model architecture
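The prefix-generation step common to both models can be sketched as follows: every case in the event log yields one training sample per prefix, with the remaining time to case completion as the regression target. The timestamps are toy integers (e.g., minutes), and a real log would carry many more attributes per event:

```python
def make_prefix_samples(case):
    """case: list of (activity, timestamp) pairs sorted by time.
    Returns [(prefix_of_activities, remaining_time), ...] — one regression
    sample per prefix of the case."""
    end = case[-1][1]
    samples = []
    for i in range(1, len(case) + 1):
        prefix = [activity for activity, _ in case[:i]]
        samples.append((prefix, end - case[i - 1][1]))
    return samples

# A hypothetical support-service case.
case = [("register", 0), ("triage", 15), ("resolve", 40), ("close", 55)]
samples = make_prefix_samples(case)
for prefix, remaining in samples:
    print(prefix, remaining)
```

After padding the prefixes to a common length and normalizing the targets, the same samples feed both the LSTM and the transformer encoder, keeping the comparison fair.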
The study addresses the problem of identifying key points on three-dimensional surfaces of physical objects that are required for placement or fixation of elements in engineering and medical applications. Direct determination of such points on real objects is limited by restricted access, geometric variability, and high accuracy requirements; therefore, digital 3D models incorporating both external and internal structure are used as the basis for analysis. An interpretable multi-criteria suitability evaluation approach is proposed, based on fuzzy logic inference, enabling integration of strict constraints and preferential criteria originating from different subject domains. The methodological framework combines systematic literature analysis, expert knowledge integration, and mathematical formalization of multi-criteria decision making. Particular attention is given to explainability and transferability, which are critical for medical scenarios (anatomical landmarks) as well as engineering and robotics scenarios (geometric and technological landmarks). The developed model generates suitability heatmaps and automatically identifies a set of admissible points that is consistent with expert annotations and safety requirements.
Keywords: key point detection; fuzzy logic inference; decision support expert system
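The combination of strict constraints and preferential criteria can be sketched with a minimal fuzzy suitability score: any violated hard constraint zeroes the score, while soft preferences enter as a weighted average of fuzzy membership degrees. The membership functions, criteria names, and weights below are illustrative assumptions, not the article's calibrated model:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def suitability(point):
    """Suitability in [0, 1] for one candidate surface point."""
    # Strict (hypothetical) constraint: curvature must be low enough.
    curvature_ok = 1.0 if point["curvature"] < 0.3 else 0.0
    # Preferences: local flatness and tool clearance, as fuzzy degrees.
    flat = triangular(point["curvature"], -0.1, 0.0, 0.4)
    access = triangular(point["clearance_mm"], 0.0, 10.0, 25.0)
    preference = 0.6 * flat + 0.4 * access
    return curvature_ok * preference   # hard gate, then soft ranking

good = {"curvature": 0.05, "clearance_mm": 9.0}
bad = {"curvature": 0.5, "clearance_mm": 9.0}   # violates the constraint
print(round(suitability(good), 3), suitability(bad))
```

Evaluating this score over a mesh of candidate points yields the suitability heatmap, and thresholding it gives the admissible point set; because each term is a named, inspectable rule, the result stays explainable to domain experts.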