

  • Approaches to reducing computational and resource requirements in artificial intelligence systems for moving objects

    The integration of artificial intelligence into mobile devices faces serious challenges, above all the limited resources available and the requirements of real-time data processing. The article reviews modern approaches to reducing computational and resource costs in artificial intelligence systems for moving objects, including model optimization and computation-allocation strategies for resource-constrained mobile platforms.

    Keywords: artificial intelligence, moving objects, lightweight models, peripheral models, hardware acceleration, knowledge distillation, quantization
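    Quantization, one of the compression techniques listed in the keywords, can be illustrated with a minimal sketch of symmetric per-tensor 8-bit post-training quantization. The weights, bit width, and function names below are invented for illustration and are not taken from the article.

```python
def quantize(weights, bits=8):
    """Map float weights to signed integers sharing one scale factor."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]    # rounding error <= 0.5 step
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer codes."""
    return [v * scale for v in q]

weights = [0.51, -0.32, 0.08, -0.91]
q, scale = quantize(weights)
restored = dequantize(q, scale)
```

    Each restored weight differs from the original by at most half a quantization step (`scale / 2`), which is what makes 8-bit inference viable on constrained hardware.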

  • Analysis of the Current State of the Monitoring Process in Spacecraft Flight Control

    The article presents an analysis of the current state of the monitoring process in spacecraft (SC) flight control. It outlines the state of monitoring technologies currently used in the flight control of modern spacecraft. The shortcomings of the monitoring process, which are exacerbated by the development of space technology, are identified. To address these shortcomings, the use of new intelligent methods is proposed, which, by increasing the automation of the spacecraft flight monitoring process, will enhance the reliability and efficiency of control. Promising methods for improving the reliability of spacecraft monitoring using artificial intelligence technologies, in particular artificial neural networks (ANN), are considered.
    An analysis of scientific publications on the application of ANNs in space technology was conducted; examples of ANN application in flight control, diagnostics, and data processing tasks are provided. The advantages and limitations of using neural networks in space technology are examined.

    Keywords: spacecraft, flight control, monitoring, state analysis, flight management, telemetry data

  • About modeling hysteresis characteristics in problems of nonlinear control system synthesis

    An algorithm for modeling smooth hysteresis nonlinearities is proposed, taking into account the slope coefficient k and the saturation level c. The developed model provides accuracy and ease of adjustment while maintaining intuitive physical parameters of the hysteresis loop, which makes it effective for practical application in the tasks of analysis and synthesis of nonlinear control systems.

    Keywords: single-valued nonlinearities, hysteresis, automatic control systems, backlash with saturation, multi-valued nonlinearities, algorithm, modeling of automatic control systems, relay, static characteristic, approximation
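    The "backlash with saturation" nonlinearity named in the keywords can be sketched as a discrete-time piecewise-linear operator with slope k, play half-width h, and saturation level c. This is the conventional non-smooth formulation with invented parameter values, not the authors' smooth approximation.

```python
def backlash_sat(xs, k=1.0, h=0.5, c=2.0, y0=0.0):
    """Backlash (play of half-width h, slope k) followed by saturation at +/-c."""
    y, out = y0, []
    for x in xs:
        upper = k * (x - h)            # branch engaged for increasing input
        lower = k * (x + h)            # branch engaged for decreasing input
        y = min(max(y, upper), lower)  # hold output while inside the play band
        y = max(-c, min(c, y))         # clip at the saturation level
        out.append(y)
    return out
```

    Running a ramp up and back down traces the hysteresis loop: the output lags the input by the play width and clips at the saturation level.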

  • Parametric synthesis of automatic control systems with ambiguous nonlinear characteristics

    The paper proposes a modified method for the parametric synthesis of nonlinear automatic control systems with ambiguous nonlinearities based on the generalized Galerkin method. To eliminate the need to determine switching points, the transient process is approximated by two sections: a polynomial before the time tp and a constant value after it. This allows us to obtain recurrent formulas for Bqi integrals, simplify calculations, and maintain the absolute stability of the system.

    Keywords: parametric synthesis, nonlinear automatic control systems, generalized Galerkin method, ambiguous (multi-valued) nonlinearities, hysteresis, switching points, polynomial approximation, impulse automatic control systems, recurrent relations

  • Reducing the accident probability of unmanned aerial systems considering route complexity

    The study addresses the problem of reducing the probability of emergency situations involving unmanned aerial systems. Accidents are regarded as outcomes of combinations of events that are individually of relatively low hazard. Causal relationships are represented by fault trees, in which the root node corresponds to an accident, the leaves correspond to basic events, and intermediate nodes describe their logical combinations. Accident scenarios are associated with the minimal cut sets of the fault tree. To identify accident prevention strategies, the concept of successful-operation paths is employed. Each such path is defined as a set of nodes having a non-empty intersection with all minimal cut sets. It is assumed that preventing all events included in a successful-operation path renders the development of accident scenarios impossible.
    The study additionally accounts for the fact that the same flight mission may be executed along routes of differing complexity. Route complexity influences the cost estimates of measures aimed at preventing the events that form accident-related combinations. A model example is provided that includes the assessment of the complexity of two routes, a table of mitigation costs for basic events, and the selection of an appropriate successful-operation path for accident prevention. The proposed methodology is intended to reduce the probability of the realization of accident-inducing event combinations to an acceptable level while adhering to operational constraints and the mission-specific requirements of the flight task.

    Keywords: unmanned aerial systems, emergency combination of events, fault tree, route complexity, logical-probabilistic security analysis
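    The selection step described above can be sketched as a minimum-cost hitting-set search: a successful-operation path is any set of basic events intersecting every minimal cut set, and the cheapest such set is chosen under the route-dependent mitigation costs. The events, cut sets, and costs below are invented for illustration.

```python
from itertools import combinations

min_cut_sets = [{"E1", "E2"}, {"E2", "E3"}, {"E3", "E4"}]
cost = {"E1": 3, "E2": 1, "E3": 2, "E4": 5}   # mitigation cost per basic event

def cheapest_path(cut_sets, cost):
    """Brute-force search for the cheapest set hitting every minimal cut set."""
    events = sorted(set().union(*cut_sets))
    best = None
    for r in range(1, len(events) + 1):
        for subset in combinations(events, r):
            s = set(subset)
            if all(s & c for c in cut_sets):       # intersects every cut set
                total = sum(cost[e] for e in subset)
                if best is None or total < best[1]:
                    best = (s, total)
    return best

path, total_cost = cheapest_path(min_cut_sets, cost)
```

    Brute force is only feasible for small fault trees; the point of the sketch is the selection criterion, not the search strategy.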

  • Methods of differential anonymization of data based on a trustworthy neural network for protecting bank customers' personal information

    The article discusses modern methods for protecting bank customers' personal information based on differential anonymization of data using trusted neural networks. It provides an overview of the regulatory framework, analyzes technological approaches and describes a developed multi-level anonymization model that combines cryptographic and machine learning techniques. Special attention is paid to balancing between preserving data utility and minimizing the risk of customer identity disclosure.

    Keywords: differential anonymization, trusted neural network, personal data, banking technologies, information security, cybersecurity

  • Research on Fusing Kalman Filtering Methods with Deep Learning

    This article analyzes the limitations of the standalone use of Kalman filters in complex dynamic systems and systematizes modern advances in the integration of deep learning methods. Practical aspects of the combined application of deep learning and Kalman filters are explored, demonstrating improved accuracy and reliability of solutions under dynamic conditions, noisy environments, and complex environmental factors. Finally, promising directions for the development of multisensor data fusion methods are outlined.

    Keywords: deep learning, integrated navigation, multi-source data fusion, Kalman filter, extended Kalman filter
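    The classical component of the hybrid schemes surveyed can be shown as a minimal scalar Kalman filter tracking a constant state through noisy measurements. The noise parameters and data are invented, and the deep-learning part (e.g. learned noise covariances) is deliberately omitted.

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the model x_k = x_{k-1} + w, z_k = x_k + v."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise q
        k = p / (p + r)           # Kalman gain balances prior vs measurement
        x = x + k * (z - x)       # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

rng = random.Random(0)
zs = [1.0 + rng.gauss(0, 0.2) for _ in range(200)]   # noisy reads of 1.0
est = kalman_1d(zs)
```

    The filtered estimate converges toward the true value while individual measurements keep fluctuating; the hybrid methods in the article replace hand-tuned `q` and `r` with learned quantities.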

  • A method for determining the coordinates and sector parameters of base stations of fourth-generation mobile radio networks for mobile device positioning tasks

    A reproducible method is presented for the autonomous determination of the coordinates of fourth-generation mobile radio network base stations and the parameters of their sectors, based solely on field observations from a modem, without using time delays or angle-of-arrival estimation methods. The approach combines robust selection of an informative "core" of measurements, weighted distance minimization, and aggregation at the site level, providing stable estimates in urban environments. Experimental verification in a real-world scene demonstrates a significant reduction in localization error compared with baseline centroid and median methods, confirming the practical applicability of the proposed solution.

    Keywords: LTE, positioning, localization, base station, site coordinate, signal strength, angular distribution, sectorization, optimization, observation, geometric median, field recording, minimization method, radio signal
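    The geometric median named in the keywords is the robust aggregate that makes centroid-style estimates resistant to outlying measurements; it is commonly computed by Weiszfeld iteration. The measurement points below are invented and the stopping rule is simplified to a fixed iteration count.

```python
import math

def geometric_median(points, iters=100):
    """Weiszfeld iteration: reweighted averaging toward the geometric median."""
    x = sum(p[0] for p in points) / len(points)   # start at the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for px, py in points:
            d = math.hypot(px - x, py - y) or 1e-12   # guard zero distance
            num_x += px / d
            num_y += py / d
            den += 1 / d
        x, y = num_x / den, num_y / den
    return x, y

# Three consistent position fixes plus one gross outlier.
fixes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (50.0, 50.0)]
mx, my = geometric_median(fixes)
```

    The centroid of these fixes would land near (12.75, 12.75), dragged far off by the outlier, while the geometric median stays near the consistent cluster.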

  • Comparative analysis of recurrent networks and transformer models in predictive monitoring

    The work compares recurrent networks and transformer-based models on the task of predicting the completion time of a business process. Both model classes operate on sequences of actions and can take a variety of attributes into account when estimating the target characteristic. The comparison used a long short-term memory (LSTM) recurrent model and a custom transformer encoder, both evaluated on publicly available real event logs of a support service. The models were trained and tested in Python using the pandas, numpy, and torch libraries, with identical data preparation, prefix generation, and train/test time splits for both models. In the experiments, the transformer encoder achieved a lower mean absolute error; on the standard deviation of the error the models were approximately equal, with a slight advantage for the transformer.

    Keywords: predictive monitoring, event log, machine learning, transformer encoder, neural networks, data preparation, regression model, normalization, padding, recurrent network, model architecture

  • Interpretable Multi-Criteria Evaluation for Detecting Key Points on 3D Surfaces

    The study addresses the problem of identifying key points on three-dimensional surfaces of physical objects that are required for placement or fixation of elements in engineering and medical applications. Direct determination of such points on real objects is limited by restricted access, geometric variability, and high accuracy requirements; therefore, digital 3D models incorporating both external and internal structure are used as the basis for analysis. An interpretable multi-criteria suitability evaluation approach is proposed, based on fuzzy logic inference, enabling integration of strict constraints and preferential criteria originating from different subject domains. The methodological framework combines systematic literature analysis, expert knowledge integration, and mathematical formalization of multi-criteria decision making. Particular attention is given to explainability and transferability, which are critical for medical scenarios (anatomical landmarks) as well as engineering and robotics scenarios (geometric and technological landmarks). The developed model generates suitability heatmaps and automatically identifies a set of admissible points that is consistent with expert annotations and safety requirements.

    Keywords: key point detection; fuzzy logic inference; decision support expert system

  • Control of the flight altitude of a helicopter-type aircraft

    The paper considers a detailed mathematical model of a helicopter-type aircraft autopilot implemented in the Matlab/Simulink simulation environment. Computer simulation is used to examine the behaviour of the system with no control laws applied, confirming the need for correction. To evaluate controller performance, a proportional-derivative fuzzy logic controller is compared with linear and nonlinear control laws.

    Keywords: helicopter, altitude control, fuzzy logic controller, automatic control system, flight safety

  • Weighting of categories in hierarchical classifiers within information retrieval systems

    Two approaches to weighting vertices in the rooted trees of hierarchical classifiers that characterize the subject domains of various information retrieval systems are presented. Weight characteristics are examined depending on the position of a vertex in the rooted tree, namely its level, depth, number of directly subordinate vertices (the «sons» clan), and number of hierarchically subordinate leaf vertices. The specifics of vertex weighting in a rooted tree are illustrated using weighted summation of level-based, depth-based, clan-based, and leaf-based vertex weights, and hierarchically additive summation of leaf weight characteristics.

    Keywords: information retrieval systems, hierarchical subject classifiers, hierarchical categorizers, subject indexing, vertex weighting in a rooted tree, level‑based weight characteristic of a vertex, depth‑based weight characteristic of a vertex
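    The positional characteristics involved can be sketched on a small invented tree: one pass computes each vertex's level, number of sons, and number of subordinate leaves, after which a weighted sum gives a vertex weight. The tree and the weighting coefficients are illustrative, not the article's.

```python
tree = {"root": ["A", "B"], "A": ["A1", "A2"], "B": [], "A1": [], "A2": []}

def characteristics(tree, v="root", level=0, out=None):
    """Post-order pass collecting level, son count, and subordinate-leaf count."""
    out = {} if out is None else out
    sons = tree[v]
    leaves = 1 if not sons else 0          # a leaf counts itself once
    for s in sons:
        characteristics(tree, s, level + 1, out)
        leaves += out[s]["leaves"]         # hierarchically additive leaf count
    out[v] = {"level": level, "sons": len(sons), "leaves": leaves}
    return out

ch = characteristics(tree)
# Weighted summation of positional characteristics (coefficients invented).
w = {v: 1.0 * c["level"] + 0.5 * c["sons"] + 0.25 * c["leaves"]
     for v, c in ch.items()}
```

    The leaf count computed bottom-up is itself an instance of the second scheme: unit leaf weights summed hierarchically into every ancestor.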

  • Analysis of the quality of work of programs that evaluate the originality of a text using a text generated by a neural network

    The article analyzes the quality of anti-plagiarism services that assess text originality, using texts generated by neural networks as an example. Three originality-assessment services and texts generated by three different neural networks were used for the analysis.

    Keywords: text originality, anti-plagiarism, neural networks, text analysis, borrowings

  • System analysis of genre-event typology of financial information without prior training

    A comprehensive method for the system analysis and processing of financial information is proposed that requires no prior training, classifying texts along five complementary taxonomies (genre, event type, tonality, level of influence, temporality) with simultaneous entity extraction. The method is based on an ensemble of three specialized instructions for a local artificial intelligence model with an adapted majority-voting algorithm and a two-level mechanism for explainable refusals. The protocol was validated by comparative testing of 14 local models on 100 expert-labeled units of information, with the model achieving 90% processing accuracy. The system implements the principles of self-consistency and selective classification, is reproducible on standard hardware, and does not require training on labeled data.

    Keywords: organizational management, software projects, intelligent decision support system, ontological approach, artificial intelligence

  • Development of approaches to the use of information technology to improve the management of objects and processes of the railway passenger complex

    This article examines the problem of control and management in transport systems using the example of passenger rail rolling stock operation processes using information technology and automation tools. The main proposed methods for improving the efficiency of vehicle operation management are the use of digital modeling of transport complex objects and processes and the automation of probabilistic-statistical analysis of data on the technical and operational characteristics of the system. The objective of the study is to improve the operational efficiency, reliability, and safety of passenger rail rolling stock by developing digital twins of the rolling stock and digital models of its operation processes. The main objectives of the study are to develop approaches to automating methods for analyzing the flow of data on the operation and technical condition of passenger rolling stock, as well as to develop a concept for applying digital modeling to solve current problems of passenger rail transport. The research hypothesis is based on the assumption of the effectiveness of applying new information technologies to solving practical problems of rolling stock operation management. The use of digital models of rolling stock units and the digitalization of the repair process are considered. The paper proposes the use of automated Pareto analysis methods for data on technical failures of railcars and least-squares modeling of distribution and density functions for passenger wagon operating indicators as continuous random variables. It is demonstrated that digital modeling of transport system objects and processes using big data analysis enables the improvement of transportation processes. General recommendations are provided for the use of information tools to improve the management of passenger rolling stock operations in rail transport.

    Keywords: information technologies, digital modeling, digital twin, automated control, system analysis, process approach, reliability, rolling stock operation, maintenance and repair, monitoring systems
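    The automated Pareto analysis mentioned above can be sketched as selecting the failure causes that jointly account for roughly 80% of observed failures (group A in ABC terms). The subsystem names and counts below are invented for illustration.

```python
failures = {"doors": 120, "brakes": 90, "HVAC": 40, "lighting": 20,
            "couplers": 15, "interior": 15}

def pareto_group_a(counts, threshold=0.8):
    """Return causes, most frequent first, until they cover `threshold`
    of all failures (the classic 80/20 cut)."""
    total = sum(counts.values())
    acc, group_a = 0, []
    for name, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        group_a.append(name)
        acc += n
        if acc / total >= threshold:
            break
    return group_a

priority_causes = pareto_group_a(failures)
```

    In a monitoring pipeline this cut would be recomputed as new failure records arrive, steering repair resources toward the dominant causes.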

  • Modeling of a project network schedule under resource constraints

    Construction work often involves risks when carrying out complex sets of tasks described in the form of network schedules, in particular, risks of violating tender deadlines and project costs. One of the main reasons for increased project risks is a lack of resources. The main objective of this study is to develop a methodology for modeling network schedules under resource constraints, taking into account the stochastic influence of risks on project completion deadlines. The paper analyzes tools for modeling project schedules; describes a mathematical model for estimating project cost based on a network schedule under resource constraints; proposes a method for modeling a network schedule in the AnyLogic environment; develops an algorithm for modeling parallel branches of a project schedule under resource constraints; and describes a method for modeling a network schedule for project work. Testing was conducted based on a network schedule for a project to construct a contact line support. It has been shown that the method allows for obtaining probabilistic estimates of project deadlines and costs under conditions of risk and limited resources. The methodology can be applied to various projects described by network schedules and will allow solving a number of practical tasks: optimizing resources allocated for project implementation, taking into account the time and cost of the project, analyzing risks affecting project implementation, and developing optimal solutions for project risk management.

    Keywords: network schedule, work plan, simulation modeling, risk analysis, project duration, project cost
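    A stripped-down Monte Carlo sketch of the idea (not the authors' AnyLogic model): each task duration is drawn from a triangular distribution, two branches run in parallel and join before a final task, and repeated simulation yields a probabilistic completion-time estimate. All durations are invented.

```python
import random

def simulate_once(rng):
    """One run of a tiny network: two parallel branches, then a final task."""
    branch1 = rng.triangular(2, 6, 3) + rng.triangular(1, 4, 2)
    branch2 = rng.triangular(3, 8, 5)
    join = max(branch1, branch2)          # parallel branches meet here
    return join + rng.triangular(1, 3, 2) # final task after the join

rng = random.Random(7)
runs = sorted(simulate_once(rng) for _ in range(10_000))
p80 = runs[int(0.8 * len(runs))]          # 80%-confidence completion time
```

    Resource constraints would enter this sketch as serialization of tasks competing for the same crew, which shifts the whole distribution to the right; the quantile, not the mean, is what deadline risk analysis needs.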

  • Genetic search as a tool for overcoming linguistic uncertainty

    This article describes a developed method for automatically optimizing the parameters of an intelligent controller based on an adaptive genetic algorithm. The key goal of this development is to improve the mechanism for generating an intelligent controller rule base through multiparameter optimization. The genetic algorithm is used to eliminate linguistic uncertainty in the design of control systems based on intelligent controllers. A unique algorithm is proposed that implements a comprehensive optimization procedure structured in three sequential stages: identifying optimal control system parameters; optimizing the structure of the intelligent controller rule base by simulating its automatic generation; and optimizing the intelligent controller parameters. Implementation of this approach optimizes the weights of fuzzy logic rules and the centers of the membership functions of linguistic variables.

    Keywords: intelligent controller, optimization, genetic algorithm, uncertainty, term set
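    A deliberately small genetic algorithm sketch: each individual is a vector of controller parameters (standing in for rule weights or membership-function centers), and selection, crossover, and mutation drive the population toward the optimum of a toy fitness function. The fitness, encoding, and all hyperparameters are invented.

```python
import random

TARGET = [0.2, -0.5, 0.8]       # stand-in "ideal" parameter vector

def fitness(ind):
    """Higher is better: negative squared distance to the target."""
    return -sum((x - t) ** 2 for x, t in zip(ind, TARGET))

def evolve(pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]         # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(len(child))
            child[i] += rng.gauss(0, 0.1)      # gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    In the article's setting the fitness would instead score closed-loop control quality, making each evaluation a simulation run, which is why the population and generation budgets matter.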

  • Simulation of the design activity diversification of innovative enterprise

    The article presents a comparative analysis of modern database management systems (PostgreSQL/PostGIS, Oracle Database, Microsoft SQL Server, and MongoDB) in the context of implementing a distributed storage of geospatial information. The aim of the study is to identify the strengths and limitations of different platforms when working with heterogeneous geospatial data and to evaluate their applicability in distributed GIS solutions. The research covers three main types of data: vector, raster, and point cloud. A comprehensive set of experiments was conducted in a test environment close to real operating conditions, including functional testing, performance benchmarking, scalability analysis, and fault tolerance assessment.
    The results demonstrated that PostgreSQL/PostGIS provides the most balanced solution, showing high scalability and stable performance across all data types, which makes it a versatile platform for building GIS applications. Oracle Database exhibited strong results when processing raster data and proved effective under heavy workloads in multi-node architectures, which is especially relevant for corporate environments. Microsoft SQL Server showed reliable performance on vector data, particularly in distributed scenarios, though requiring optimization for binary storage. MongoDB proved suitable for storing raster content and metadata through GridFS, but its scalability is limited compared to traditional relational DBMS.
    In conclusion, PostgreSQL/PostGIS can be recommended as the optimal choice for projects that require universality and high efficiency in distributed storage of geospatial data, while Oracle and Microsoft SQL Server may be preferable in specialized enterprise solutions, and MongoDB can be applied in tasks where flexible metadata management is a priority.

    Keywords: geographic information system, database, postgresql, postgis, oracle database, microsoft sql server, mongodb, vector, raster, point cloud, scalability, performance, fault tolerance

  • On the task of building independent data processing architectures in intelligent transport systems

    The article discusses modern approaches to the design and implementation of data processing architectures in intelligent transport systems (ITS) with a focus on ensuring technological sovereignty. Special attention is paid to the integration of machine learning practices to automate the full lifecycle of machine learning models: from data preparation and streaming to real-time monitoring and updating of models. Architectural solutions using distributed computing platforms such as Hadoop and Apache Spark, in-memory databases on Apache Ignite, as well as Kafka messaging brokers to ensure reliable transmission of events are analyzed. The importance of infrastructure flexibility and scalability, support for parallel operation of multiple models, and reliable access control, including the use of transport-layer security protocols, is emphasized. Recommendations are given on the organization of a logging and monitoring system for rapid response to changes and incidents. The presented solutions are focused on ensuring high fault tolerance, safety and compliance with the requirements of industrial operation, which allows for efficient processing of large volumes of transport data and adaptation of ITS systems to import-independent conditions.

    Keywords: data processing, intelligent transport systems, distributed processing, scalability, fault tolerance

  • Variational Model for Digital Restoration of Monumental Painting

    The paper addresses the problem of digital restoration of monumental painting through the reconstruction of lost color fragments. A variational model is proposed, illustrating a two-stage approach: segmentation of the image with identification of damaged regions using a convolutional neural network based on the U-Net architecture, followed by color reconstruction in the selected areas using a convolutional autoencoder in the CIELAB color space. The specific features of applying the discussed neural networks, the data preparation workflow, and the training parameters are described. The results demonstrate that the proposed approach provides reliable detection of defective areas and high accuracy of color restoration while preserving the artistic style of the original. The limitations of the method and potential directions for further development are also discussed.

    Keywords: variational model, monumental painting, digital restoration, image segmentation, convolutional neural network, color reconstruction, convolutional autoencoder, CIELAB color space

  • Creating a C# Application for Modeling Maximum Flow in a Transportation Network

    This article examines transportation network modeling using the Ford–Fulkerson algorithm. It describes the process of finding a minimum cut using a graphical editor and library developed in the C# programming language. Key concepts of graph and network theory are presented to clarify the problem statement. An example of solving a transportation problem using the developed software is shown, and the program's results are compared with a control example.

    Keywords: transportation network, maximum flow problem, Ford–Fulkerson algorithm, minimum cut in the network, software library, graphical editor, C# programming language
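    The algorithm named in the title can be sketched compactly as the BFS-based (Edmonds–Karp) variant of Ford–Fulkerson over an adjacency matrix of capacities; by max-flow/min-cut duality the resulting flow value equals the capacity of the minimum cut the article's editor displays. This Python sketch mirrors, but is not, the article's C# library.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    residual = [row[:] for row in cap]
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                        # no augmenting path remains
        bottleneck, v = float("inf"), t        # min residual capacity on path
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                          # push flow, update residuals
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# A classic textbook network: source 0, sink 5; its maximum flow is 23.
cap = [[0, 16, 13, 0, 0, 0],
       [0, 0, 10, 12, 0, 0],
       [0, 4, 0, 0, 14, 0],
       [0, 0, 9, 0, 0, 20],
       [0, 0, 0, 7, 0, 4],
       [0, 0, 0, 0, 0, 0]]
value = max_flow(cap, 0, 5)
```

    The vertices reachable from the source in the final residual graph form one side of a minimum cut, which is how a graphical editor can highlight the bottleneck edges.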

  • Optimizing Error Rates in 3D Printing and Additive Manufacturing Processes with Artificial Intelligence Integration

    The article discusses the problems and solutions of 3D printing and additive manufacturing in the context of the integration of artificial intelligence. With increasing demands for quality and efficiency, artificial intelligence is becoming a key element of process optimization. The factors influencing the suitability of models for 3D printing, including time, cost and materials, are analyzed. Optimization methods such as genetic algorithms and machine learning can simplify testing and evaluation tasks. Genetic algorithms provide flexibility in solving complex problems, improve quality, and reduce the likelihood of errors. In conclusion, the importance of further research on artificial intelligence for improving the productivity and quality of additive manufacturing is emphasized.

    Keywords: artificial intelligence, 3D printing, additive manufacturing, machine learning, process optimization, genetic algorithms, product quality, automation, productivity, geometric complexity

  • Comparative analysis of the performance of the lpSolve, Microsoft Solver Foundation, and Google OR-Tools libraries using the example of a high-dimensional linear Boolean programming problem

    The article presents a comparative analysis of the performance of three solver programs (based on the libraries lpSolve, Microsoft Solver Foundation and Google OR-Tools) when solving a high-dimensional linear Boolean programming problem. The study was conducted using the example of the problem of identifying the parameters of a homogeneous nested piecewise linear regression of the first type. The authors have developed a testing methodology that includes generating test data, selecting hardware platforms, and identifying key performance metrics. The results showed that Google OR-Tools (especially the SCIP solver) demonstrates the best performance, outperforming its counterparts by a factor of 2–3. The Microsoft Solver Foundation has shown stable results, while the lpSolve IDE proved to be the least performant but the easiest to use. All solvers provided comparable accuracy of the solution. Based on the analysis, recommendations are formulated for choosing a solver depending on performance requirements and integration conditions. The article is of practical value for specialists working with optimization problems and researchers in the field of mathematical modeling.

    Keywords: regression model, homogeneous nested piecewise linear regression, parameter estimation, method of least modules, linear Boolean programming problem, index set, comparative analysis, software solvers, algorithm performance, Google OR-Tools

  • Comparison of relational databases for use in information systems

    The paper highlights the importance of relational databases for storing information. The capabilities and shortcomings of the well-known Oracle Database, MySQL, and Microsoft Access databases are analyzed. It is found that Access is better suited to storing information in local information systems, while MySQL is used for developing web applications.

    Keywords: databases, data storage, Oracle, MySQL, Microsoft Access

  • Development of Expert Systems Based on Large Language Models and Retrieval-Augmented Generation

    This research investigates the development of expert systems (ES) based on large language models (LLMs) enhanced with retrieval-augmented generation. The study focuses on integrating LLMs into ES architectures to enhance decision-making processes. The growing influence of LLMs in AI has opened new possibilities for expert systems. Traditional ES require extensive development of knowledge bases and inference algorithms, while LLMs offer advanced dialogue capabilities and efficient data processing. However, their reliability in specialized domains remains a challenge. The research proposes an approach combining LLMs with retrieval-augmented generation, where the model utilizes external knowledge bases for specialized responses. The ES architecture is based on LLM agents implementing production rules and uncertainty handling through confidence coefficients. A specialized prompt manages system-user interaction and knowledge processing. The architecture includes agents for situation analysis, knowledge management, and decision-making, implementing multi-step inference chains. Experimental validation using YandexGPT 5 Pro demonstrates the system’s capability to perform core ES functions: user interaction, rule application, and decision generation. Combining LLMs with structured knowledge representation enhances ES performance significantly. The findings contribute to creating more efficient ES by leveraging LLM capabilities with formalized knowledge management and decision-making algorithms.

    Keywords: large language model, expert system, artificial intelligence, decision support, knowledge representation, prompt engineering, uncertainty handling, decision-making algorithms, knowledge management
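    The abstract does not specify how the confidence coefficients are combined, so as a hedged illustration here is the conventional MYCIN-style certainty-factor combination rule that production systems typically use when two rules support the same conclusion.

```python
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:                 # both rules support
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:                 # both rules oppose
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))   # conflicting evidence

# Two rules supporting the same conclusion with CF 0.6 and 0.5:
cf = combine_cf(0.6, 0.5)                     # 0.6 + 0.5 * 0.4 = 0.8
```

    The rule is associative for same-sign evidence, so an LLM agent can fold in supporting rules one at a time; whether the article's system uses this exact scheme is an assumption.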