The article presents a comparative analysis of the performance of three solver programs (based on the lpSolve, Microsoft Solver Foundation, and Google OR-Tools libraries) when solving a high-dimensional linear Boolean programming problem. The study uses the problem of identifying the parameters of a homogeneous nested piecewise linear regression of the first type as a test case. The authors developed a testing methodology that includes generating test data, selecting hardware platforms, and identifying key performance metrics. The results showed that Google OR-Tools (in particular its SCIP solver) delivers the best performance, outperforming the alternatives by a factor of two to three. Microsoft Solver Foundation showed stable results, while the lpSolve IDE proved to be the least performant but the easiest to use. All solvers provided comparable solution accuracy. Based on the analysis, recommendations are formulated for choosing a solver depending on performance requirements and integration conditions. The article is of practical value for specialists working with optimization problems and for researchers in mathematical modeling.
Keywords: regression model, homogeneous nested piecewise linear regression, parameter estimation, method of least modules, linear Boolean programming problem, index set, comparative analysis, software solvers, algorithm performance, Google OR-Tools
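To make the class of problems being benchmarked concrete, the following minimal sketch sets up a small 0-1 (Boolean) linear program with the Google OR-Tools SCIP backend mentioned in the abstract; the variables, coefficients, and constraints are illustrative only and do not reproduce the regression-identification instances used in the study.

```python
# Minimal 0-1 (Boolean) linear program solved with OR-Tools / SCIP.
# All coefficients below are illustrative.
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("SCIP")  # MIP backend from the comparison

# Boolean decision variables x_0 .. x_4
x = [solver.BoolVar(f"x_{i}") for i in range(5)]

# Illustrative linear constraints
solver.Add(3 * x[0] + 2 * x[1] + x[2] + 4 * x[3] + 2 * x[4] <= 6)
solver.Add(x[0] + x[1] + x[2] >= 1)

# Illustrative linear objective
solver.Minimize(5 * x[0] + 4 * x[1] + 3 * x[2] + 7 * x[3] + 6 * x[4])

status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
    print("objective =", solver.Objective().Value())
    print("solution  =", [int(v.solution_value()) for v in x])
    print("wall time, ms =", solver.wall_time())  # the kind of metric compared in the study
```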
This research investigates the development of expert systems (ES) based on large language models (LLMs) enhanced with augmented generation techniques. The study focuses on integrating LLMs into ES architectures to enhance decision-making processes. The growing influence of LLMs in AI has opened new possibilities for expert systems. Traditional ES require extensive development of knowledge bases and inference algorithms, while LLMs offer advanced dialogue capabilities and efficient data processing. However, their reliability in specialized domains remains a challenge. The research proposes an approach combining LLMs with augmented generation, where the model utilizes external knowledge bases for specialized responses. The ES architecture is based on LLM agents implementing production rules and uncertainty handling through confidence coefficients. A specialized prompt manages system-user interaction and knowledge processing. The architecture includes agents for situation analysis, knowledge management, and decision-making, implementing multi-step inference chains. Experimental validation using YandexGPT 5 Pro demonstrates the system’s capability to perform core ES functions: user interaction, rule application, and decision generation. Combining LLMs with structured knowledge representation enhances ES performance significantly. The findings contribute to creating more efficient ES by leveraging LLM capabilities with formalized knowledge management and decision-making algorithms.
Keywords: large language model, expert system, artificial intelligence, decision support, knowledge representation, prompt engineering, uncertainty handling, decision-making algorithms, knowledge management
The paper provides a comparative analysis of the accuracy of determining aircraft coordinates using the classical correlation-extreme algorithm (CEA) and a machine learning method based on a fully convolutional network (FCN), both operating on terrain maps. Two-dimensional correlated random functions are used as relief models. It is shown that the CEA is effective with small amounts of data, whereas the FCN demonstrates high noise immunity after training on representative samples. For both methods, the accuracy of determining the aircraft coordinates depends on the size of the reference area, the number of reference templates, the entropy, and the correlation coefficient of the random relief.
Keywords: correlation-extreme algorithm, deep learning, convolutional neural network, aircraft guidance, digital terrain model, Fourier filtering, spatial correlation, noise immunity, algorithm comparison, autonomous navigation, hybrid systems, terrain entropy
This paper is devoted to the theoretical analysis and comparative characteristics of methods and algorithms for automatic identity verification based on the dynamic characteristics of a handwritten signature. The processes of collecting and preprocessing dynamic characteristics are considered. An analysis of classical methods, including hidden Markov models, support vector machines, and modern neural network architectures, including recurrent, convolutional, and Siamese neural networks, is conducted. The advantages of using Siamese neural networks in verification tasks under the condition of a small volume of training data are highlighted. Key metrics for assessing the quality of biometric systems are defined. The advantages and disadvantages of the considered methods are summarized, and promising areas of research are outlined.
Keywords: verification, signature, machine learning, dynamic characteristic, hidden Markov models, support vector machine, neural network approach, recurrent neural networks, convolutional neural networks, Siamese neural networks, type I error
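As an illustration of the Siamese approach highlighted in the preceding abstract, the sketch below pairs a shared encoder over dynamic signature signals with a contrastive loss; the feature set, network sizes, and margin are assumptions for illustration, not a description of any system reviewed in the article.

```python
# Sketch of a Siamese verifier for dynamic signature features (hypothetical dimensions).
# Two signatures are encoded by a shared network; the distance between embeddings is
# trained with a contrastive loss so genuine pairs are close and forgeries far apart.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_features=5, hidden=64, emb=32):
        super().__init__()
        # A GRU over the dynamic signal (e.g. x, y, pressure, azimuth, tilt per time step)
        self.rnn = nn.GRU(in_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, emb)

    def forward(self, seq):                 # seq: (batch, time, in_features)
        _, h = self.rnn(seq)
        return self.head(h[-1])             # (batch, emb)

def contrastive_loss(e1, e2, same, margin=1.0):
    # same = 1 for a genuine pair, 0 for an impostor pair
    d = torch.norm(e1 - e2, dim=1)
    return (same * d.pow(2) + (1 - same) * torch.clamp(margin - d, min=0).pow(2)).mean()

encoder = Encoder()
a = torch.randn(8, 200, 5)                  # batch of reference signatures
b = torch.randn(8, 200, 5)                  # batch of questioned signatures
labels = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(a), encoder(b), labels)
loss.backward()
```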
The article discusses the principles of operation, key technologies, and prospects for the development of eye-tracking systems in virtual reality (VR) devices. It highlights the main components of such systems, including infrared cameras, computer vision algorithms, and calibration methods. Eye-tracking technologies such as Pupil Center Corneal Reflection (PCCR) are analyzed in detail, as is their integration with the rendering pipeline to implement foveated rendering, which significantly reduces the load on the GPU. Current issues, including latency and power consumption, are discussed, and solutions such as predictive algorithms and hardware acceleration are proposed. Special attention is paid to promising directions, including neurointerfaces and holographic systems. The article draws on recent research and developments from leading companies such as Tobii, Qualcomm, and Facebook Reality Labs. It will be of interest to VR device developers, researchers in human-computer interaction, and computer vision specialists.
Keywords: eye-tracking, virtual reality, foveated rendering, computer vision, human-computer interaction, PCCR
This article presents a methodology for assessing damage to railway infrastructure in emergency situations using imagery from unmanned aerial vehicles (UAVs). The study focuses on applying computer vision and machine learning techniques to process high-resolution aerial data for detecting, segmenting, and classifying structural damage.
Optimized image processing algorithms, including U-Net for segmentation and Canny edge detection, are used to automate analysis. A mathematical model based on linear programming is proposed to optimize the logistics of restoration efforts. Test results show reductions in total cost and delivery time by up to 25% when optimization is applied.
The paper also explores 3D modeling from UAV imagery using photogrammetry methods (Structure from Motion and Multi-View Stereo), enabling point cloud generation for further damage analysis. Additionally, machine learning models (Random Forest, XGBoost) are employed to predict flight parameters and resource needs under changing environmental and logistical constraints.
The combination of UAV-based imaging, algorithmic damage assessment, and predictive modeling allows for a faster and more accurate response to natural or man-made disasters affecting railway systems. The presented framework enhances decision-making and contributes to a more efficient and cost-effective restoration process.
Keywords: UAVs, image processing, LiDAR, 3D models of destroyed objects, emergencies, computer vision, convolutional neural networks, machine learning methods, infrastructure restoration, damage diagnostics, damage assessment
The article considers mathematical and computer modeling of the noise characteristics of strain-gauge (tensoresistive) pressure sensors. A model has been developed that takes into account thermal, shot, flicker, and process noise, which together form the total output signal of the sensor. Based on a numerical experiment, spectral analysis by the Welch method was performed, the slope of the 1/f region was approximated, and the integrated noise powers in various frequency ranges were calculated. It is shown that flicker noise dominates in the low-frequency range, thermal noise dominates in the medium- and high-frequency ranges, and drift components manifest themselves near zero frequency. Analysis of the signal-to-noise ratio revealed its decrease at low frequencies and stabilization at frequencies above 1 kHz. The results confirm the adequacy of the model and its applicability for predicting noise characteristics, assessing manufacturing quality, and optimizing the operating conditions of pressure sensors.
Keywords: tensoresistive sensors, noise characteristics, thermal noise, flicker noise, shot noise, power spectral density, signal-to-noise ratio, computer simulation
The practice of producing optical interference coatings shows that, when new thin-film materials are used, obtaining optical products that meet the specified requirements on the quality function depends on how accurately the refractive index of the films is known. Estimates of the refractive index obtained by different measurements often disagree, which prevents narrowband filters with the required technical parameters from being produced. This article proposes an approach to estimating the refractive index of a thin film by solving the inverse synthesis problem, based on experimental determination of the thickness of the deposited films using an X-ray fluorescence coating thickness analyzer and on the reflection coefficient spectrum obtained with a broadband spectrophotometer. Numerical modeling carried out during the study showed that even with a 5% tolerance on the coating thickness estimate, a fairly accurate determination of the refractive index can be expected. The correctness of the approach was verified using a thin film with a known refractive index, which was also determined by the proposed method of numerically modeling the reflection spectrum of the digital twin of the coating.
Keywords: interference coating, numerical modeling, reflection coefficient spectrum
Modern approaches to synthetic speech recognition are in most cases based on the analysis of specific acoustic, spectral, or linguistic patterns left behind by speech synthesis algorithms. An analysis of open sources has shown that the further development of methods and algorithms for synthetic speech recognition is crucial for providing protection against emerging threats and maintaining trust in existing biometric systems.
This paper proposes an algorithm for synthetic speech detection based on calculating the entropy of the audio signal. The relevance of the work is driven by the increasing number of cases involving the malicious use of synthetic speech, which is becoming almost indistinguishable from genuine human speech. Experiments were conducted on the CMU ARCTIC dataset using the XTTS v.2 model. The results demonstrated that the entropy of synthetic speech is significantly higher and that the algorithm is robust to data losses. The advantages of the algorithm are its interpretability and low computational complexity. The proposed algorithm makes it possible to decide whether synthetic speech is present without complex spectral analysis or machine learning methods.
Keywords: synthetic speech, spoofing, Shannon entropy, speech recognition
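A minimal sketch of the kind of entropy computation described above is given below; the frame length, bin count, and any decision threshold are assumptions and do not correspond to the authors' settings.

```python
# Sketch: frame-wise Shannon entropy of an audio signal (all parameters are illustrative).
import numpy as np

def frame_entropy(signal, frame_len=1024, n_bins=64):
    """Shannon entropy of the amplitude distribution in each frame, in bits."""
    entropies = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        hist, _ = np.histogram(frame, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]                        # ignore empty bins
        entropies.append(-np.sum(p * np.log2(p)))
    return np.array(entropies)

# Toy usage: compare a smooth signal with a noisier one.
rng = np.random.default_rng(0)
smooth = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 16000))
noisy = smooth + 0.3 * rng.standard_normal(16000)

print("mean frame entropy, smooth signal:", frame_entropy(smooth).mean())
print("mean frame entropy, noisy signal :", frame_entropy(noisy).mean())
# A decision rule could compare the mean frame entropy against a calibrated threshold.
```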
An algorithm has been developed and a program written in Python for calculating numerical values of the optimal delayed filtering operator for an L-Markov process with quasi-rational spectral density, which generalizes the Markov process with a rational spectrum. The construction of the optimal delayed filtering operator is based on the spectral theory of random processes. The formula for the filtering operator was obtained using the theory of L-Markov processes, methods for calculating stochastic integrals, the theory of functions of a complex variable, and methods of trigonometric regression. An example of an L-Markov process (signal) with a quasi-rational spectrum is considered, which is of interest for the control of complex stochastic systems. A trigonometric model was used as the basis for constructing the mathematical model of the optimal delayed filtering operator. It is shown that the values of the delayed filtering operator are represented by a linear combination of the values of the received signal at certain time points and the values of sine and cosine functions at the same points. It is established that the numerical values of the filtering operator depend significantly on the parameter β of the joint spectral density of the received and transmitted signals; therefore, three different problems of signal transmission through different physical media were considered. It is also established that the absolute value of the real part of the filtering operator, over all three intervals of variation of the delay period and in all three media, exceeds the absolute value of the imaginary part by a factor of two or more on average. Graphs of the dependence of the real and imaginary parts of the filtering operator on the delay period t are constructed, as well as three-dimensional graphs of the dependence of the delayed filtering operator itself on the delay period. A physical justification of the obtained results is given.
Keywords: random process, L-Markov process, noise, delayed filtering, spectral characteristic, filtering operator, trigonometric trend, standardized approximation error
A mathematical model has been constructed, an algorithm developed, and a program written in Python for calculating the numerical values of the optimal filtering operator with prediction for an L-Markov process with a quasi-rational spectrum. The probabilistic model of the filtering operator formula was obtained from the spectral analysis of L-Markov processes using methods for calculating stochastic integrals, the theory of analytic functions of a complex variable, and methods of correlation and regression analysis. An example of an L-Markov process is considered for which the values of the optimal filtering operator with prediction can be expressed as a linear combination of the values of the process at certain time points and a sum of numerical values of cosines and sines at the same points. The numerical values of the filtering operator were obtained from a mathematical model of trigonometric regression with 16 harmonics, which best approximates the process under study and has the minimum approximation error.
Keywords: random process, L-Markov process, prediction filtering, spectral characteristics, filtering operator
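As a small illustration of the trigonometric regression named in the preceding abstract as the basis of the model, the sketch below fits 16 harmonics by least squares; the signal, sampling grid, and base period are illustrative.

```python
# Sketch: trigonometric regression with 16 harmonics fitted by least squares.
# The signal, sampling grid, and base period below are illustrative.
import numpy as np

def trig_design(t, n_harmonics=16, period=1.0):
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2 * np.pi * k / period
        cols.append(np.cos(w * t))
        cols.append(np.sin(w * t))
    return np.column_stack(cols)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 400, endpoint=False)
signal = (1.5 * np.cos(2 * np.pi * 3 * t)
          + 0.8 * np.sin(2 * np.pi * 7 * t)
          + 0.1 * rng.standard_normal(t.size))

A = trig_design(t)
coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
approx = A @ coeffs
print("relative approximation error:",
      np.linalg.norm(signal - approx) / np.linalg.norm(signal - signal.mean()))
```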
In the modern world, where technology is developing at an incredible rate, computers have gained the ability to "see" and perceive the world around them much as a human does. This has led to a revolution in visual data analysis and processing. One of the key achievements has been the use of computer vision to find objects in photographs and videos. Thanks to these technologies, it is possible not only to find objects such as people, cars, or animals, but also to indicate their position precisely using bounding boxes or segmentation masks. This article examines in detail modern deep neural network models used to detect humans in images and videos taken from a height and at a long distance against a complex background. The architectures of the Faster Region-based Convolutional Neural Network (Faster R-CNN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Detector (SSD), and You Only Look Once (YOLO) are analyzed, and their accuracy, speed, and ability to detect objects effectively against a heterogeneous background are compared. Special attention is paid to the behavior of each model in specific practical situations where both high-quality detection of the target object and image processing speed are important.
Keywords: machine learning, artificial intelligence, deep learning, convolutional neural networks, human detection, computer vision, object detection, image processing
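For one of the architectures compared above, the following sketch runs a pretrained Faster R-CNN from torchvision and keeps only detections of the person class; the input file name and the 0.5 confidence threshold are assumptions.

```python
# Sketch: detecting people with a pretrained Faster R-CNN from torchvision
# (one of the architectures compared in the article); the threshold is illustrative.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("aerial_frame.jpg").convert("RGB")   # hypothetical input frame
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

PERSON_LABEL = 1                                         # COCO class id for "person"
for box, label, score in zip(prediction["boxes"],
                             prediction["labels"],
                             prediction["scores"]):
    if label.item() == PERSON_LABEL and score.item() > 0.5:
        print("person at", [round(v) for v in box.tolist()],
              "score", round(score.item(), 2))
```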
The article proposes a mathematical model based on an integrated approach to modeling the interaction of the surfaces, taking into account the geometric features of the groove. An important aspect of the novelty of the work is its validation against experimental data. To describe the motion of the lubricant in the working gap, a model of the flow of a true viscous lubricant is used, including the continuity equation. The calculations and experiments performed confirmed the adequacy of the proposed model, which indicates that it can be applied in practice to engineering analysis and design. The results improve the understanding of the mechanism of lubricant flow in radial sliding bearings with a polymer coating and an axial groove on the shaft surface. The studies also showed that the presence of a groove on the shaft surface affects the pressure distribution, which in turn affects the tribotechnical parameters of the bearing. Introducing the groove helps distribute the lubricant more efficiently over the working gap, increases the load-carrying capacity of the bearing, reduces the coefficient of friction, and reduces wear on the contact surfaces.
Keywords: radial bearing, wear resistance assessment, antifriction polymer coating, groove, hydrodynamic mode, verification
This paper proposes a mathematical model of the laminar flow of a truly viscous lubricant in the clearance of a radial plain bearing with a nonstandard support profile. The influence of a fluoroplastic-containing polymer coating and a groove on the shaft surface is considered, taking into account nonlinear effects, which improves the accuracy of the description of hydrodynamic processes. Thin-film approximations and continuity equations are used to determine the hydrodynamic pressure, load capacity, and friction coefficient. A comparison with existing calculation models demonstrated improved performance prediction. The results demonstrate the feasibility of ensuring stable shaft floatation, confirming the applicability of the developed model for engineering calculations of bearings with a polymer coating and a groove.
Keywords: radial plain bearing, mathematical modeling, true viscous lubricant, polymer composite coating, hydrodynamic regime, tribotechnical characteristics
As part of the work, finite element modeling and numerical studies of single-layer cylindrical rod roofs were carried out. Horizontal faces were introduced, and the influence of the side elements on the stress-strain state of the structure was determined. The degree of change in the forces and in the displacements of the nodes was established. The behavior of the roof was analyzed, and dangerous areas with maximum parameters were identified. The effect of introducing horizontal faces was assessed by comparing the resulting patterns. A favorable redistribution of forces over the surface was achieved, and a reduction in the displacements of the characteristic nodes was ensured. Space is also provided for routing utility lines.
Keywords: cylindrical rod roofs, horizontal faces, side elements, forces, displacements
This article presents the development of a combined method for summarizing Russian-language texts that integrates extractive and abstractive approaches to overcome the limitations of existing methods. The method comprises the following stages: text preprocessing, comprehensive linguistic analysis using RuBERT, semantic similarity-based clustering, extractive summarization via the TextRank algorithm, and abstractive refinement using the RuT5 neural network model. Experiments conducted on the Gazeta.Ru news corpus confirmed the method's superiority in terms of precision, recall, F-score, and ROUGE metrics. The results demonstrated the advantage of the combined approach over purely extractive methods (such as TF-IDF and statistical methods) and abstractive methods (such as RuT5 and mBART).
Keywords: combined method, summarization, Russian-language texts, TextRank, RuT5
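The extractive stage of the pipeline can be illustrated with a TextRank-style ranking over a sentence-similarity graph, as sketched below; for brevity the sketch uses TF-IDF similarity in place of the RuBERT embeddings and omits the RuT5 abstractive refinement, so it should be read as a simplified stand-in rather than the authors' implementation.

```python
# Simplified sketch of the extractive (TextRank-style) stage: sentences are ranked by
# PageRank over a similarity graph. Plain TF-IDF similarity stands in for the RuBERT
# embeddings used in the article; real input would be Russian news sentences.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_extract(sentences, top_k=3):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    graph = nx.from_numpy_array(sim)
    scores = nx.pagerank(graph)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in sorted(ranked[:top_k])]  # keep original order

sentences = [
    "First placeholder sentence of a news item.",
    "A second sentence develops the main point of the story.",
    "A third sentence adds a minor detail.",
    "A fourth sentence summarizes the report.",
]
print(textrank_extract(sentences, top_k=2))
```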
The paper considers a stochastic model of the operation of an automatic information processing system, described by a system of Kolmogorov differential equations for the state probabilities under the assumption that the flow of requests is Poisson (the simplest flow). A scheme is proposed for solving a high-dimensional system of differential equations with slowly changing initial data, and the parameters of the presented model are compared with those of a simulation model of the Apache HTTP Server. To compare the simulation and stochastic models, a test server was used to generate requests and simulate their processing with Apache JMeter, from which the parameters of the incoming and processed request flows were estimated. The presented model is consistent with the simulation model and makes it possible to evaluate the system's states under different operating conditions and to calculate the load on the web server when the amount of data is large.
Keywords: stochastic modeling, simulation model, Kolmogorov equations, sweep method, queuing system, performance characteristics, test server, request flow, service channels, queue
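The Kolmogorov equations underlying such a model can be integrated numerically, as in the sketch below for a birth-death (multi-channel queue) system; the arrival and service rates, channel count, and capacity are illustrative and are not the Apache HTTP Server parameters from the study.

```python
# Sketch: integrating the Kolmogorov (forward) equations for a birth-death queue
# with c channels and bounded capacity; rates and sizes are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, c, K = 8.0, 3.0, 4, 20          # arrival rate, service rate, channels, capacity

def rates(n):
    """Birth (arrival) and death (service) rates in state n."""
    birth = lam if n < K else 0.0
    death = mu * min(n, c)
    return birth, death

def kolmogorov(t, p):
    dp = np.zeros_like(p)
    for n in range(K + 1):
        birth_n, death_n = rates(n)
        dp[n] -= (birth_n + death_n) * p[n]
        if n > 0:
            dp[n] += rates(n - 1)[0] * p[n - 1]
        if n < K:
            dp[n] += rates(n + 1)[1] * p[n + 1]
    return dp

p0 = np.zeros(K + 1)
p0[0] = 1.0                                # the system starts empty
sol = solve_ivp(kolmogorov, (0.0, 10.0), p0, method="LSODA")
p_final = sol.y[:, -1]
print("P(all channels busy or queueing) =", p_final[c:].sum())
```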
This article examines the results of computer simulations of adhesive bond tear testing. Simulation models of adhesive bond tearing were constructed taking into account two stages of sample testing, the geometric structure, and the physical and mechanical properties of the materials and adhesive. The modeling took into account the statistical dispersion of parameters at the micro-level of the process. The article describes the algorithm for the sample testing process and evaluates its behavior depending on the values and variations of the material parameters.
Keywords: Computer simulation, model, tear test, adhesive bond, material strength, simulation results, stress concentration
This article discusses numerical modeling of a plywood roof panel based on a finite element model. The modeling of the panel skins was performed taking into account the orthotropy of the plywood. When calculating the roof panel's deformability, it is necessary to account for the reduction in structural rigidity during operation by introducing a reduction factor. A computational study of the roof structure's deformability allowed us to establish a coefficient for the utilization of the roof panel's cross-section rigidity.
Keywords: plywood roofing board, glued laminated board element, modulus of elasticity, volumetric density, Poisson's ratio, design span, standard load, four-node finite element, section moment of inertia
Introduction.
The performance of multi-story Cross-Laminated Timber (CLT) structures depends on their steel-to-wood connections. Adoption is hindered by a lack of standardized design methods, as current practices often use overly rigid idealizations that fail to capture the joint's true compliance and nonlinear failure.
Aims and Objectives.
This paper presents a verified numerical methodology for simulating the nonlinear stiffness of steel-to-CLT connections. The objective is to establish a procedure for calibrating a Cohesive Zone Model (CZM) using experimental data for global structural analysis.
Materials and Methods.
The study uses the CZM in Ansys Mechanical via 'Contact Debonding', governed by a bilinear Traction-Separation Law (TSL). Model parameters were calibrated against experimental pull-out tests (Mode II dominant) on steel screws in CLT. A key finding is that the model's "interface" stiffness requires iterative adjustment and is not equal to the global "system" stiffness, as it must be decoupled from material elasticity.
Results and Discussion.
The calibrated numerical model replicated experimental force-displacement diagrams with high fidelity. The simulation predicted the peak pull-out force (4.55% discrepancy) and its corresponding displacement (5.67% discrepancy), capturing the elastic phase, peak load, and post-peak softening. The validated methodology provides an engineering-accurate (4–6% deviation) tool for modeling nonlinear joint compliance. This approach allows designers to replace inaccurate rigid-body assumptions, reducing uncertainty and enabling a more realistic stiffness assessment of CLT structures.
Keywords: cross-laminated timber, stiffness of joint connections, joint connection, cohesive zone material, nonlinearity
Overheating of photovoltaic modules (PVMs) is a key problem leading to a decrease in their efficiency and service life, especially in regions with high levels of solar radiation. The existing models are not detailed enough to accurately predict the thermal conditions of thin-film micromorphic modules under real-world operating conditions. The paper develops a three-dimensional finite element model of the Pramac 125 micromorphic module in the Ansys software package, which takes into account the geometric, optical and thermodynamic characteristics of all layers. To verify the model, a field experiment was conducted in the Astrakhan region with the registration of the module temperature and meteorological parameters. The validation confirmed the high accuracy of the model: the coefficient of determination R² between the calculated and experimental data was 0.9991. The model makes it possible to estimate thermal conditions and associated energy losses, which justifies the need to use cooling systems in the southern regions of Russia.
Keywords: photovoltaic module, micromorphic technology, solar radiation, temperature regime, output power, mathematical modeling, finite element method, thermal regimes, model validation, numerical experiment
The paper considers the effect of particle size on the dynamics of suspended sediments in a riverbed. The EcoGIS-Simulation computing complex is used to simulate the joint dynamics of surface water and sediments on a model of the Volga River below the Volga hydroelectric dam. The most important factor in the variability of the riverbed is the spring release of water from the Volgograd reservoir, when the water discharge increases fivefold. Several integral and local characteristics of the riverbed are calculated as functions of the particle size coefficient.
Keywords: suspended sediment, soil particle size, sediment dynamics, diffusion, bottom sediments, channel morphology, relief, particle gravitational settling velocity, EcoGIS-Simulation software and hardware complex, Wexler formula, water flow
The article examines the influence of the data processing direction on the results of the discrete cosine transform (DCT). Based on group theory, the symmetries of the DCT basis functions are considered, and the changes that occur when the direction of signal processing is reversed are analyzed. It is shown that the antisymmetric components of the basis change sign when the order of the samples is reversed, while the symmetric components remain unchanged. Modified expressions for the block DCT are proposed that take the change of processing direction into account. The invariance of the frequency composition of the transform to the data processing direction has been confirmed experimentally. The results demonstrate that the proposed approach can be applied to the analysis of arbitrary signals, including image processing and data compression.
Keywords: discrete transforms, basis functions, invariance, symmetry, processing direction, matrix representation, correlation
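The symmetry property discussed above can be checked numerically: for the DCT-II, reversing the sample order flips the sign of the odd-indexed (antisymmetric) coefficients and leaves the even-indexed (symmetric) ones unchanged, so the magnitude spectrum is direction-invariant. A short check of this relation follows; the block length and test signal are arbitrary.

```python
# Numerical check: for the DCT-II, reversing the sample order gives X_rev[k] = (-1)^k X[k],
# so odd-indexed (antisymmetric) coefficients flip sign, even-indexed ones do not,
# and the magnitude spectrum is invariant to the processing direction.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
x = rng.standard_normal(8)                 # an arbitrary 8-point block

forward = dct(x, type=2, norm="ortho")
reverse = dct(x[::-1].copy(), type=2, norm="ortho")

signs = np.where(np.arange(len(x)) % 2 == 0, 1.0, -1.0)
print(np.allclose(reverse, signs * forward))          # True: sign flip on odd indices
print(np.allclose(np.abs(reverse), np.abs(forward)))  # True: magnitudes unchanged
```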
Modern engineering equipment operation necessitates solving optimal control problems based on measurement data from numerous physical and technological process parameters. The analysis of multidimensional data arrays for their approximation with analytical dependencies represents both current and practically significant challenges. Existing software solutions demonstrate limitations when working with multidimensional data or provide only fixed sets of basis functions.
Objectives. The aim of this study is to develop software for multidimensional regression based on the least squares method and a library of constructible basis functions, enabling users to create and utilize diverse basis functions for approximating multidimensional data.
Methods. The development employs a generalized least squares method model with loss function minimization in the form of a multidimensional elliptical paraboloid. LASSO (L1), ridge regression (L2), and Elastic Net regularization mechanisms enhance model generalization and numerical stability. A precomputation strategy reduces asymptotic complexity from O(b²·N·f·log₂(p)) to O(b·N·(b+f·log₂(p))). The software architecture includes recursive algorithms for basis function generation, WebAssembly for computationally intensive operations, and modern web technologies including Vue3, TypeScript, and visualization libraries.
Results. The developed web application provides efficient approximation of multidimensional data with 2D and 3D visualization capabilities. Quality assessment employs MSE, R², and AIC metrics. The software supports XLSX data loading and intuitive basis function construction through a user-friendly interface.
Conclusion. The practical value lies in creating a publicly accessible tool at https://datapprox.com for analyzing and modeling complex multidimensional dependencies without requiring additional software installation.
Keywords: approximation, least squares method, basis functions, multidimensional regression, L1/L2 regularization, web application, multidimensional elliptical paraboloid
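The core fitting step can be illustrated as regularized least squares over a small set of user-constructed basis functions, as sketched below; the basis list, ridge weight, and data are illustrative and do not reproduce the application's constructible-basis machinery or its LASSO/Elastic Net options.

```python
# Sketch: multidimensional approximation as regularized least squares over a small,
# user-constructed basis. The basis list, ridge weight, and data are illustrative.
import numpy as np

# Basis functions of a 2-D input point (x1, x2)
basis = [
    lambda p: 1.0,
    lambda p: p[0],
    lambda p: p[1],
    lambda p: p[0] * p[1],
    lambda p: np.sin(p[0]),
]

def design_matrix(X):
    return np.array([[phi(p) for phi in basis] for p in X])

def fit_ridge(X, y, alpha=1e-3):
    """Least squares with L2 (ridge) regularization: (A^T A + alpha*I) w = A^T y."""
    A = design_matrix(X)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(200, 2))
y = (0.5 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.3 * np.sin(X[:, 0])
     + 0.05 * rng.standard_normal(200))

w = fit_ridge(X, y)
pred = design_matrix(X) @ w
print("coefficients:", np.round(w, 3))
print("MSE:", np.mean((pred - y) ** 2))
```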
The study addresses the problem of short-term forecasting of ice temperature in engineering systems with high sensitivity to thermal loads. A transformer-based architecture is proposed, enhanced with a physics-informed loss function derived from the heat balance equation. This approach accounts for the inertial properties of the system and aligns the predicted temperature dynamics with the supplied power and external conditions. The model is tested on data from an ice rink, sampled at one-minute intervals. A comparative analysis is conducted against baseline architectures including LSTM, GRU, and Transformer using MSE, MAE, and MAPE metrics. The results demonstrate a significant improvement in accuracy during transitional regimes, as well as robustness to sharp temperature fluctuations—particularly following ice resurfacing. The proposed method can be integrated into intelligent control loops for engineering systems, providing not only high predictive accuracy but also physical interpretability. The study confirms the effectiveness of incorporating physical knowledge into neural forecasting models.
Keywords: short-term forecasting, time series analysis, transformer architecture, machine learning, physics-informed modeling, predictive control
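The general pattern of the physics-informed loss can be sketched as a data term plus a penalty on the residual of a lumped heat-balance equation; the balance written in the code, its coefficients, and the weighting factor are assumptions for illustration and are not the formulation used in the article.

```python
# Sketch of a physics-informed loss in the spirit described above: a data-fit term plus a
# penalty on the residual of an assumed lumped heat balance
#   C * dT/dt = P_supplied - k * (T - T_ambient),
# whose form, signs, and coefficients are illustrative only.
import torch

def physics_informed_loss(T_pred, T_true, P_supplied, T_ambient,
                          dt=60.0, C=5.0e6, k=1.2e3, lam=0.1):
    # Data-fit term
    mse = torch.mean((T_pred - T_true) ** 2)

    # Discrete residual of the assumed heat balance over consecutive time steps
    dT_dt = (T_pred[:, 1:] - T_pred[:, :-1]) / dt
    rhs = (P_supplied[:, :-1] - k * (T_pred[:, :-1] - T_ambient[:, :-1])) / C
    physics = torch.mean((dT_dt - rhs) ** 2)

    return mse + lam * physics

# Toy usage with a batch of 4 sequences of 30 one-minute steps
T_pred = torch.randn(4, 30, requires_grad=True)
T_true = torch.randn(4, 30)
P_supplied = torch.rand(4, 30) * 1.0e4
T_ambient = torch.full((4, 30), 15.0)
loss = physics_informed_loss(T_pred, T_true, P_supplied, T_ambient)
loss.backward()
```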