
  • Simulation of the design activity diversification of innovative enterprise

    The article presents a comparative analysis of modern database management systems (PostgreSQL/PostGIS, Oracle Database, Microsoft SQL Server, and MongoDB) in the context of implementing a distributed storage of geospatial information. The aim of the study is to identify the strengths and limitations of different platforms when working with heterogeneous geospatial data and to evaluate their applicability in distributed GIS solutions. The research covers three main types of data: vector, raster, and point cloud. A comprehensive set of experiments was conducted in a test environment close to real operating conditions, including functional testing, performance benchmarking, scalability analysis, and fault tolerance assessment.
    The results demonstrated that PostgreSQL/PostGIS provides the most balanced solution, showing high scalability and stable performance across all data types, which makes it a versatile platform for building GIS applications. Oracle Database exhibited strong results when processing raster data and proved effective under heavy workloads in multi-node architectures, which is especially relevant for corporate environments. Microsoft SQL Server showed reliable performance on vector data, particularly in distributed scenarios, though requiring optimization for binary storage. MongoDB proved suitable for storing raster content and metadata through GridFS, but its scalability is limited compared to traditional relational DBMS.
    In conclusion, PostgreSQL/PostGIS can be recommended as the optimal choice for projects that require universality and high efficiency in distributed storage of geospatial data, while Oracle and Microsoft SQL Server may be preferable in specialized enterprise solutions, and MongoDB can be applied in tasks where flexible metadata management is a priority.
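
    As a rough illustration of the performance benchmarking described above, the sketch below times a simple PostGIS bounding-box query from Python. The connection string, table name ("parcels"), and geometry column are hypothetical placeholders; the article's actual test harness is not reproduced here.

    ```python
    # Minimal benchmarking sketch (assumed setup): times a PostGIS spatial query with psycopg2.
    import time
    import psycopg2

    def time_spatial_query(dsn: str, bbox: tuple, runs: int = 10) -> float:
        """Return the mean execution time of a bounding-box intersection query."""
        query = """
            SELECT count(*)
            FROM parcels
            WHERE geom && ST_MakeEnvelope(%s, %s, %s, %s, 4326);
        """
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            timings = []
            for _ in range(runs):
                start = time.perf_counter()
                cur.execute(query, bbox)
                cur.fetchone()
                timings.append(time.perf_counter() - start)
        return sum(timings) / len(timings)

    if __name__ == "__main__":
        # The DSN below is a placeholder; adjust to the actual test environment.
        print(time_spatial_query("dbname=gis user=gis_test", (30.0, 50.0, 31.0, 51.0)))
    ```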

    Keywords: geographic information system, database, postgresql, postgis, oracle database, microsoft sql server, mongodb, vector, raster, point cloud, scalability, performance, fault tolerance

  • On the task of building independent data processing architectures in intelligent transport systems

    The article discusses modern approaches to the design and implementation of data processing architectures in intelligent transport systems (ITS) with a focus on ensuring technological sovereignty. Special attention is paid to the integration of machine learning operations practices to automate the full lifecycle of machine learning models: from data preparation and streaming to real-time monitoring and model updating. Architectural solutions using distributed computing platforms such as Hadoop and Apache Spark, in-memory databases on Apache Ignite, as well as Kafka messaging brokers to ensure reliable transmission of events are analyzed. The importance of infrastructure flexibility and scalability, support for the parallel operation of multiple models, and reliable access control, including the use of transport layer security protocols, is emphasized. Recommendations are given on the organization of a logging and monitoring system for rapid response to changes and incidents. The presented solutions are focused on ensuring high fault tolerance, safety and compliance with the requirements of industrial operation, which allows for efficient processing of large volumes of transport data and adaptation of ITS to import-independent conditions.
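
    A minimal sketch (assumed configuration) of reliable event transmission to a Kafka broker, as in the streaming layer described above. The broker address, topic name, and message fields are hypothetical, not taken from the article.

    ```python
    # Publish one ITS event to Kafka with TLS transport and full-replica acknowledgement.
    import json
    from kafka import KafkaProducer  # kafka-python package

    producer = KafkaProducer(
        bootstrap_servers="broker:9092",
        security_protocol="SSL",         # transport layer security, per the article's emphasis
        acks="all",                      # wait for all in-sync replicas (fault tolerance)
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    event = {"sensor_id": "cam-042", "speed_kmh": 63.5, "ts": 1700000000}
    producer.send("its.traffic.events", value=event)
    producer.flush()
    ```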

    Keywords: data processing, intelligent transport systems, distributed processing, scalability, fault tolerance

  • Development of Expert Systems Based on Large Language Models and Retrieval-Augmented Generation

    This research investigates the development of expert systems (ES) based on large language models (LLMs) enhanced with retrieval-augmented generation (RAG) techniques. The study focuses on integrating LLMs into ES architectures to enhance decision-making processes. The growing influence of LLMs in AI has opened new possibilities for expert systems. Traditional ES require extensive development of knowledge bases and inference algorithms, while LLMs offer advanced dialogue capabilities and efficient data processing. However, their reliability in specialized domains remains a challenge. The research proposes an approach combining LLMs with retrieval-augmented generation, where the model utilizes external knowledge bases for specialized responses. The ES architecture is based on LLM agents implementing production rules and uncertainty handling through confidence coefficients. A specialized prompt manages system-user interaction and knowledge processing. The architecture includes agents for situation analysis, knowledge management, and decision-making, implementing multi-step inference chains. Experimental validation using YandexGPT 5 Pro demonstrates the system's capability to perform core ES functions: user interaction, rule application, and decision generation. Combining LLMs with structured knowledge representation enhances ES performance significantly. The findings contribute to creating more efficient ES by leveraging LLM capabilities with formalized knowledge management and decision-making algorithms.
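
    A minimal sketch of production rules with confidence coefficients, combined in a MYCIN-style certainty-factor manner. It only illustrates the uncertainty-handling idea mentioned above; the article's agents, prompts, and YandexGPT calls are not reproduced.

    ```python
    # Forward-chaining over rules with confidence coefficients (certainty factors).
    from dataclasses import dataclass

    @dataclass
    class Rule:
        premises: frozenset      # facts that must all be present
        conclusion: str
        cf: float                # confidence coefficient of the rule, 0..1

    def combine(cf_old: float, cf_new: float) -> float:
        """Combine two positive certainty factors supporting the same conclusion."""
        return cf_old + cf_new * (1.0 - cf_old)

    def infer(facts: dict, rules: list) -> dict:
        """One forward pass: facts is {fact: cf}; returns derived {conclusion: cf}."""
        derived = {}
        for rule in rules:
            if all(p in facts for p in rule.premises):
                premise_cf = min(facts[p] for p in rule.premises)
                cf = premise_cf * rule.cf
                derived[rule.conclusion] = combine(derived.get(rule.conclusion, 0.0), cf)
        return derived

    rules = [
        Rule(frozenset({"pressure_high", "valve_stuck"}), "shutdown_line", 0.8),
        Rule(frozenset({"pressure_high"}), "shutdown_line", 0.3),
    ]
    print(infer({"pressure_high": 0.9, "valve_stuck": 0.7}, rules))
    ```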

    Keywords: large language model, expert system, artificial intelligence, decision support, knowledge representation, prompt engineering, uncertainty handling, decision-making algorithms, knowledge management

  • Web Solution for Automating Work Time Tracking and Task Coordination at Enterprises

    This paper addresses the challenges of automating enterprise-level business processes through the development of a specialized web application designed for task planning and working time accounting. The research focuses on solving problems related to organizational efficiency within the context of contemporary digital transformations. An innovative architectural design is proposed, leveraging advanced web technologies such as React for the frontend, Node.js with Express.js for the backend, and MySQL for reliable data storage. Particular emphasis is placed on utilizing the Effector library for efficient state management, resulting in significant performance improvements by reducing redundant UI renders. The developed solution offers robust features that enhance operational transparency, resource allocation optimization, and overall productivity enhancement across departments. Furthermore, economic justification demonstrates cost savings achieved through streamlined workflows and reduced administrative overhead. The practical implementation of this system has shown promising results, providing businesses with enhanced capabilities to manage tasks effectively while ensuring scalability and adaptability to future growth requirements. 

    Keywords: automation of business processes, task management, time management, web application, React, Node.js, Effector, information systems performance

  • Autotest Quality Metrics: How to Measure the Efficiency and Reliability of Automation

    In the context of modern continuous integration and delivery processes, it is critically important not only to have automated tests, but also to ensure their real effectiveness, reliability, and economic feasibility. This paper systematizes the key metrics for evaluating the quality of automated testing, with a special focus on the problem of unstable tests. Two new indicators are introduced and justified: the unstable-test rate and continuous integration pipeline losses, which directly reflect the cost of maintaining the test infrastructure. The limitations of traditional metrics, in particular code coverage, are analyzed in detail, and the superiority of mutation testing as a more reliable indicator of a test suite's ability to detect defects is demonstrated. On demonstration data from a real continuous integration pipeline, the following key dependencies were identified: an increase in code coverage does not guarantee an improvement in mutation testing results and does not lead to an increase in the number of detected defects; a high proportion of unstable tests correlates with significant losses of machine time and a decrease in confidence in test results; and reducing the time to detect and eliminate defects is achieved not only by increasing coverage, but also by reducing the proportion of unstable tests, improving system observability, and strengthening defect-management discipline.
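
    A minimal sketch of the two indicators discussed above: the unstable (flaky) test rate and the machine-time losses of the continuous integration pipeline. The field names of the input records are assumptions for illustration.

    ```python
    # Compute the unstable-test rate and CI time losses from rerun statistics.
    def flaky_rate(test_runs: dict) -> float:
        """test_runs: {test_name: list of bool outcomes across reruns of the same commit}."""
        flaky = sum(1 for outcomes in test_runs.values() if len(set(outcomes)) > 1)
        return flaky / len(test_runs) if test_runs else 0.0

    def ci_time_loss_minutes(pipeline_runs: list) -> float:
        """Machine time spent on pipeline reruns triggered only by unstable tests."""
        return sum(r["duration_min"] for r in pipeline_runs if r.get("rerun_due_to_flaky"))

    runs = {"test_login": [True, False, True], "test_export": [True, True, True]}
    pipelines = [{"duration_min": 14.0, "rerun_due_to_flaky": True},
                 {"duration_min": 15.5, "rerun_due_to_flaky": False}]
    print(flaky_rate(runs), ci_time_loss_minutes(pipelines))
    ```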

    Keywords: quality metrics for automated testing, mutation testing, unstable tests, code coverage, empirical metric analysis, comparative analysis of testing metrics, optimization of testing processes, cost-effectiveness of automation, software quality management

  • System analysis of the methodology for estimating the parameters of a complex technical system using the interval estimation method

    Problem statement. When modeling a complex technical system, the issues of parameter estimation are of primary importance. To solve this problem, it is necessary to obtain a methodology that allows eliminating errors and inaccuracies in obtaining numerical parameters. Goal. The article is devoted to a systematic analysis of the methodology for estimating the parameters of a complex technical system using the interval estimation method. The research method. A systematic analysis of the methods of using interval estimates of numerical parameters is carried out. The decomposition and structuring of the methods were carried out. Results. The expediency of using a methodology for describing the parameters of a complex technical system using the interval estimation method is shown. An analysis of the use of various interval estimation models is presented. Practical significance. Application in the analysis and construction of complex systems is considered as a practical application option. The method of estimating the parameters of a complex technical system using the interval estimation method can be used as a practical guide.
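
    A minimal sketch of interval estimation for a derived parameter of a technical system: each measured quantity is given as an interval [lo, hi], and the bounds of a monotone expression are propagated. The quantities and the formula are illustrative, not the article's case study.

    ```python
    # Enclose a derived parameter by evaluating a monotone expression at interval corners.
    from itertools import product

    def interval_eval(func, *intervals):
        """Bounds of func over interval arguments, evaluated at all corner points
        (valid for expressions monotone in each argument on the given box)."""
        values = [func(*corner) for corner in product(*intervals)]
        return min(values), max(values)

    # Example: power P = U * I with voltage and current known only as intervals.
    U = (218.0, 226.0)   # volts
    I = (4.7, 5.2)       # amperes
    print(interval_eval(lambda u, i: u * i, U, I))   # -> (1024.6, 1175.2)
    ```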

    Keywords: interval estimation, parameter estimation, numerical data, fuzzy data, complex technical systems

  • Experimental Evaluation of the Effectiveness of a Schema Matching Method Based on Machine Learning

    This article investigates the problem of structured data schema matching and aggregates the results from previous stages of the research. The systematization of results demonstrated that while the previously considered approaches show promising outcomes, their effectiveness is often insufficient for real-world application. One of the most effective methods was selected for further study. The Self-Organizing Map method was analyzed, which is based on a criterial analysis of the attribute composition of schemas, using an iterative approach to minimize the distance between points (in the current task, a point represents a schema attribute). An experiment on schema matching was conducted using five examples. The results revealed both the strengths and limitations of the method under investigation. It was found that the selected method exhibits insufficient robustness and reproducibility of results on diverse real-world datasets. The verification of the method confirmed the need for its further optimization. The conclusion outlines directions for future research in this field.
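
    A minimal self-organizing map sketch (implemented from scratch with numpy) for matching schema attributes: each attribute is represented by a feature vector, the map is trained on the attributes of both schemas, and attributes falling onto the same or neighbouring map units become candidate matches. The character-trigram featurization is an assumption, not the article's encoding.

    ```python
    # Toy SOM-based attribute matching between two schemas.
    import numpy as np

    def featurize(name: str, dim: int = 64) -> np.ndarray:
        v = np.zeros(dim)
        s = f"  {name.lower()}  "
        for i in range(len(s) - 2):
            v[hash(s[i:i + 3]) % dim] += 1.0
        return v / (np.linalg.norm(v) + 1e-9)

    def train_som(data, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
        rng = np.random.default_rng(seed)
        weights = rng.normal(size=(grid[0], grid[1], data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]), indexing="ij"), -1)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), grid)
            lr = lr0 * np.exp(-t / iters)
            sigma = sigma0 * np.exp(-t / iters)
            h = np.exp(-((coords - np.array(bmu)) ** 2).sum(-1) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)   # pull the neighbourhood toward x
        return weights

    attrs_a = ["customer_id", "birth_date", "full_name"]
    attrs_b = ["client_id", "date_of_birth", "name"]
    data = np.array([featurize(a) for a in attrs_a + attrs_b])
    weights = train_som(data)
    bmus = [np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), weights.shape[:2]) for x in data]
    for a, ba in zip(attrs_a, bmus[:3]):
        # nearest attribute of schema B on the map grid
        j = int(np.argmin([abs(ba[0] - bb[0]) + abs(ba[1] - bb[1]) for bb in bmus[3:]]))
        print(a, "->", attrs_b[j])
    ```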

    Keywords: data management, schema fusion, machine learning, classification, clustering, experimental analysis, data metrics

  • A systematic approach to creating lithium-ion batteries: the relationship between the main battery components and the principle of operation

    The article discusses the application of a systematic approach to the development and optimization of lithium-ion batteries (LIBs). Traditional methods that focus on improving individual components (anode, cathode, and electrolyte) often do not lead to a proportional increase in the overall performance of the battery system. The systematic approach views LIBs as a complex, interconnected system where the properties of each component directly influence the behavior of others and the overall performance, including energy and power density, life cycle, safety, and cost. The work analyzes the key aspects of the approach: the interdependence between the main components of a lithium-ion battery, as well as the features of selecting materials for each component. It is shown that only a multidisciplinary approach that combines chemistry, materials science, and engineering can achieve a synergistic effect and create highly efficient, safe, and reliable battery systems for modern applications.

    Keywords: lithium-ion battery, system approach, electrode materials, degradation, optimization, cathode, LTO, NMC

  • Assessing the consequences of emergency situations at railway infrastructure facilities using UAV data

    This article presents a methodology for assessing damage to railway infrastructure in emergency situations using imagery from unmanned aerial vehicles (UAVs). The study focuses on applying computer vision and machine learning techniques to process high-resolution aerial data for detecting, segmenting, and classifying structural damage.
    Optimized image processing algorithms, including U-Net for segmentation and Canny edge detection, are used to automate analysis. A mathematical model based on linear programming is proposed to optimize the logistics of restoration efforts. Test results show reductions in total cost and delivery time by up to 25% when optimization is applied.
    The paper also explores 3D modeling from UAV imagery using photogrammetry methods (Structure from Motion and Multi-View Stereo), enabling point cloud generation for further damage analysis. Additionally, machine learning models (Random Forest, XGBoost) are employed to predict flight parameters and resource needs under changing environmental and logistical constraints.
    The combination of UAV-based imaging, algorithmic damage assessment, and predictive modeling allows for a faster and more accurate response to natural or man-made disasters affecting railway systems. The presented framework enhances decision-making and contributes to a more efficient and cost-effective restoration process.
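
    A minimal sketch of the linear-programming step for restoration logistics: minimizing the total delivery cost of repair materials from depots to damaged sites subject to supply and demand constraints (a transportation problem). The numbers are illustrative, not the article's data.

    ```python
    # Transportation problem solved with scipy's linprog (HiGHS backend).
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([[4.0, 6.0, 9.0],      # cost[i, j]: depot i -> damaged site j
                     [5.0, 3.0, 7.0]])
    supply = np.array([60.0, 50.0])        # tonnes available at each depot
    demand = np.array([30.0, 40.0, 35.0])  # tonnes required at each site

    m, n = cost.shape
    A_ub = np.zeros((m, m * n))            # row i: total shipped from depot i <= supply[i]
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    A_eq = np.zeros((n, m * n))            # column j: total shipped to site j == demand[j]
    for j in range(n):
        A_eq[j, j::n] = 1.0

    res = linprog(cost.ravel(), A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=(0, None), method="highs")
    print(res.x.reshape(m, n), res.fun)
    ```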

    Keywords: UAVs, image processing, LiDAR, 3D models of destroyed objects, emergencies, computer vision, convolutional neural networks, machine learning methods, infrastructure restoration, damage diagnostics, damage assessment

  • Contextual-Diffusion Method for Enriching the TF-IDF Matrix to Enhance Semantic Coherence of Topic Models in News Text Corpora

    The article addresses a significant limitation of the classic TF-IDF method in the context of topic modeling for specialized text corpora. While effective at creating structurally distinct document clusters, TF-IDF often fails to produce semantically coherent topics. This shortcoming stems from its reliance on the bag-of-words model, which ignores semantic relationships between terms, leading to orthogonal and sparse vector representations. This issue is particularly acute in narrow domains where synonyms and semantically related terms are prevalent. To overcome this, the authors propose a novel approach: a contextual-diffusion method for enriching the TF-IDF matrix.
    The core of the proposed method involves an iterative procedure of contextual smoothing based on a directed graph of semantic proximity, built using an asymmetric measure of term co-occurrence. This process effectively redistributes term weights not only to the words present in a document but also to their semantic neighbors, thereby capturing the contextual "halo" of concepts.
    The method was tested on a corpus of news texts from the highly specialized field of atomic energy. A comparative analysis was conducted using a set of clustering and semantic metrics, such as the silhouette coefficient and topic coherence. The results demonstrate that the new approach, while slightly reducing traditional metrics of structural clarity, drastically enhances the thematic coherence and diversity of the extracted topics. This enables a shift from mere statistical clustering towards the identification of semantically integral and interpretable themes, which is crucial for tasks involving the monitoring and analysis of large textual data in specialized domains.
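
    A minimal sketch of the contextual smoothing idea: the TF-IDF matrix is iteratively mixed with its product by a row-normalized term co-occurrence matrix, so part of each term's weight "diffuses" to its frequent neighbours. The smoothing coefficient and the symmetric co-occurrence used here are simplifications of the article's directed, asymmetric proximity graph.

    ```python
    # Diffuse TF-IDF weights along a term co-occurrence graph.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

    docs = ["reactor core cooling system", "cooling circuit of the reactor",
            "spent fuel storage", "fuel assembly storage pool"]

    counts = CountVectorizer().fit_transform(docs)            # documents x terms
    tfidf = TfidfTransformer().fit_transform(counts).toarray()

    cooc = (counts.T @ counts).toarray().astype(float)        # term co-occurrence
    np.fill_diagonal(cooc, 0.0)
    row_sums = cooc.sum(axis=1, keepdims=True)
    P = np.divide(cooc, row_sums, out=np.zeros_like(cooc), where=row_sums > 0)

    alpha, n_iter = 0.3, 2                                    # diffusion strength and depth
    X = tfidf.copy()
    for _ in range(n_iter):
        X = (1 - alpha) * X + alpha * (X @ P)                 # spread weight to neighbours
    print(np.round(X, 3))
    ```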

    Keywords: topic modeling, latent Dirichlet allocation, TF-IDF, contextual diffusion, semantic proximity, co-occurrence, text vectorization, bag-of-words model, topic coherence, natural language processing, silhouette coefficient, text data analysis

  • Practical comparison of some decoders for erasure-correcting codes: speed vs. corrective ability

    The paper is devoted to the search for an effective decoding method for a new class of binary erasure-correcting codes. The codes in question are defined by an encoding matrix with restrictions on column weights (MRSt codes). To work with the constructed codes, a decoder based on information aggregates and a decoder based on belief propagation, adapted for the erasure case, are used. Experiments have been carried out to determine the decoding speed and correcting ability of these methods for the named classes of noise-resistant codes. For MRSt codes, the belief propagation decoder is significantly faster than the information-aggregate decoder, but is slightly inferior in correcting ability.
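
    A minimal sketch of belief-propagation decoding specialized to the binary erasure case (the "peeling" form): any parity check with exactly one erased position recovers that bit, and the procedure repeats until nothing changes. The parity-check matrix below is a toy example, not an MRSt code from the article.

    ```python
    # Peeling decoder for erasures over a toy parity-check matrix.
    import numpy as np

    def peel_erasures(H: np.ndarray, received: list):
        """received: list of 0/1 values, with None for erased positions."""
        word = list(received)
        changed = True
        while changed:
            changed = False
            for row in H:
                idx = [j for j in np.flatnonzero(row) if word[j] is None]
                if len(idx) == 1:                                # exactly one unknown in the check
                    known = sum(word[j] for j in np.flatnonzero(row) if word[j] is not None) % 2
                    word[idx[0]] = known                         # parity of the check must be 0
                    changed = True
        return word

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])
    print(peel_erasures(H, [1, None, 0, None, 1, 1]))
    ```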

    Keywords: channels with erasure, distributed fault-tolerant data storage systems, code with equal-weight columns, decoder based on information aggregates, decoder based on the belief propagation, RSt code, MRSt code

  • Using a local approach to hierarchical text classification

    The article formulates the task of hierarchical text classification and describes approaches to hierarchical classification together with metrics for evaluating their performance. The local approach to hierarchical classification is examined in detail, and its different variants are described. A series of experiments on training local hierarchical classifiers with various vectorization methods is conducted, and the evaluation results of the trained classifiers are compared.
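
    A minimal sketch of the "local classifier per parent node" variant of the local approach: one flat classifier chooses the top-level category, and a separate classifier is trained for the children of each parent. The tiny corpus and two-level hierarchy are illustrative only.

    ```python
    # Local classifier per parent node on a toy two-level hierarchy.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    hierarchy = {"sport": ["football", "tennis"], "science": ["physics", "biology"]}
    texts = ["goal scored in the final", "serve and volley on grass",
             "quantum field experiment", "gene expression in cells"]
    parents = ["sport", "sport", "science", "science"]
    children = ["football", "tennis", "physics", "biology"]

    root_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)).fit(texts, parents)
    child_clfs = {}
    for parent, kids in hierarchy.items():
        idx = [i for i, p in enumerate(parents) if p == parent]
        child_clfs[parent] = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)) \
            .fit([texts[i] for i in idx], [children[i] for i in idx])

    def predict(text: str):
        parent = root_clf.predict([text])[0]          # top-level decision
        return parent, child_clfs[parent].predict([text])[0]   # local decision below it

    print(predict("match ended with a late goal"))
    ```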

    Keywords: classification, hierarchical classification, local classification, hierarchical precision, hierarchical recall, hierarchical F-measure, natural language processing, vectorization

  • Synthetic Speech Recognition Algorithm Based on Audio Signal Entropy Calculation

    Modern approaches to synthetic speech recognition are in most cases based on the analysis of specific acoustic, spectral, or linguistic patterns left behind by speech synthesis algorithms. An analysis of open sources has shown that the further development of methods and algorithms for synthetic speech recognition is crucial for providing protection against emerging threats and maintaining trust in existing biometric systems.
    This paper proposes an algorithm for synthetic speech detection based on the calculation of audio signal entropy. The relevance of the work is driven by the increasing number of cases involving the malicious use of synthetic speech, which is becoming almost indistinguishable from genuine human speech. The results demonstrated that the entropy of synthetic speech is significantly higher, and the algorithm is robust to data losses. The advantages of the algorithm are its interpretability and low computational complexity. Experiments were conducted on the CMU ARCTIC dataset using the XTTS v.2 model. The proposed algorithm enables making a decision on the presence of synthetic speech without the need for complex spectral analysis or machine learning methods.
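
    A minimal sketch of the entropy feature: the Shannon entropy of the amplitude distribution of an audio frame. The signals below are synthetic stand-ins, and the article's full decision rule and thresholds are not reproduced.

    ```python
    # Shannon entropy of an audio frame's amplitude histogram.
    import numpy as np

    def frame_entropy(samples: np.ndarray, n_bins: int = 64) -> float:
        hist, _ = np.histogram(samples, bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    natural_like = np.sin(np.linspace(0, 40 * np.pi, 16000)) * np.hanning(16000)
    synthetic_like = natural_like + 0.1 * rng.standard_normal(16000)   # toy stand-in only
    print(frame_entropy(natural_like), frame_entropy(synthetic_like))
    ```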

    Keywords: synthetic speech, spoofing, Shannon entropy, speech recognition

  • Synthesis of Kalman Filter for Asymmetric Quadcopter Control with Optimization of Covariance Matrix Ratio

    The work is devoted to the application of a linear Kalman filter (KF) for estimating the roll angle of a quadcopter with structural asymmetry, under which the control input contains a nonzero constant component. This violates the standard assumption of zero mathematical expectation and reduces the efficiency of traditional KF implementations. A filter synthesis method is proposed based on the optimization of the covariance matrices ratio using a criterion that accounts for both the mean square error and the transient response time. The effectiveness of the approach is confirmed by simulation and experimental studies conducted on a setup with an IMU-6050 and an Arduino Nano. The obtained results demonstrated that the proposed Kalman filter provides improved accuracy in estimating the angle and angular velocity, thereby simplifying its tuning for asymmetric dynamic systems.
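
    A minimal sketch of a linear Kalman filter for roll-angle estimation from gyro rate and accelerometer angle, where only the ratio of process to measurement covariance is tuned, echoing the covariance-ratio optimization discussed above. All numerical values are illustrative, not the article's identified parameters.

    ```python
    # Angle-and-bias Kalman filter with the process/measurement covariance ratio as the tuning knob.
    import numpy as np

    def kalman_roll(gyro_rate, accel_angle, dt=0.01, q_ratio=0.05):
        """gyro_rate, accel_angle: equal-length sequences; returns filtered roll angles."""
        x = np.zeros(2)                      # state: [angle, gyro bias]
        P = np.eye(2)
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        B = np.array([dt, 0.0])
        H = np.array([[1.0, 0.0]])
        R = np.array([[1.0]])                # measurement noise (scale reference)
        Q = q_ratio * np.eye(2)              # process noise set via the ratio q/r
        out = []
        for u, z in zip(gyro_rate, accel_angle):
            x = F @ x + B * u                # predict with the gyro rate as control input
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
            x = x + (K @ (np.array([z]) - H @ x))
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

    t = np.arange(0, 2, 0.01)
    true_angle = 10 * np.sin(2 * np.pi * 0.5 * t)
    gyro = np.gradient(true_angle, 0.01) + 0.5          # constant offset mimics the asymmetry
    accel = true_angle + np.random.default_rng(1).normal(0, 1.0, t.size)
    print(kalman_roll(gyro, accel)[:5])
    ```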

    Keywords: Kalman filter, quadcopter with asymmetry, optimization of covariance matrices, functional with mean square error and process time, complementary filter, roll and pitch control

  • Construction of a mathematical model and calculation of numerical values of the delayed filtering operator for the L-Markov process

    An algorithm has been developed and a program has been written in the Python programming language for calculating the numerical values of the optimal delayed filtering operator for an L-Markov process with a quasi-rational spectral density, which is a generalization of a Markov process with a rational spectrum. The construction of the optimal delayed filtering operator is based on the spectral theory of random processes. The computational formula of the filtering operator was obtained using the theory of L-Markov processes, methods for calculating stochastic integrals, the theory of functions of a complex variable, and methods of trigonometric regression. An example of an L-Markov process (signal) with a quasi-rational spectrum, of interest from the point of view of controlling complex stochastic systems, is considered. A trigonometric model was taken as the basis for constructing the mathematical model of the optimal delayed filtering operator. It is shown that the values of the delayed filtering operator are represented as a linear combination of the values of the received signal at certain moments in time and the values of sine and cosine functions at the same moments. It was established that the numerical values of the filtering operator depend significantly on the parameter β of the joint spectral density of the received and transmitted signals, which is why three different problems of signal propagation through different physical media were considered. It was found that the absolute value of the real part of the filtering operator, over all three intervals of the delay time and in all three media, exceeds the absolute value of the imaginary part by a factor of two or more on average. Graphs of the dependence of the real and imaginary parts of the filtering operator on the delay time τ were plotted, as well as three-dimensional graphs of the dependence of the delayed filtering operator itself on the delay time. A physical justification of the obtained results is given.

    Keywords: random process, L-Markov process, noise, delayed filtering, spectral characteristic, filtering operator, trigonometric trend, standardized approximation error

  • An algorithm for implementing an optimal filtering operator with a prediction based on its synthesized mathematical model for an L-Markov process with a quasi-rational spectrum

    A mathematical model has been constructed, an algorithm has been developed, and a program has been written in the Python programming language for calculating the numerical values of the optimal filtering operator with a forecast for an L-Markov process with a quasi-rational spectrum. The probabilistic model of the filtering operator formula has been obtained on the basis of the spectral analysis of L-Markov processes, using methods for calculating stochastic integrals, the theory of analytic functions of a complex variable, and methods of correlation and regression analysis. An example of an L-Markov process is considered for which the values of the optimal filtering operator with a forecast could be expressed as a linear combination of the values of the process at certain moments in time and a sum of numerical values of cosines and sines at the same moments. The numerical values of the filtering operator were obtained from a mathematical model of trigonometric regression with 16 harmonics, which best approximates the process under study and has the minimum approximation error.
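
    A minimal sketch of fitting a trigonometric regression with a fixed number of harmonics by ordinary least squares, in the spirit of the 16-harmonic model mentioned above. The sampled signal is synthetic; the article's process data are not reproduced.

    ```python
    # Least-squares trigonometric regression with a fixed harmonic count.
    import numpy as np

    def fit_trig_regression(t, y, n_harmonics, period):
        """Fit y(t) ~ a0 + sum_k [a_k cos(k w t) + b_k sin(k w t)] by least squares."""
        w = 2 * np.pi / period
        columns = [np.ones_like(t)]
        for k in range(1, n_harmonics + 1):
            columns += [np.cos(k * w * t), np.sin(k * w * t)]
        A = np.column_stack(columns)
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs, A @ coeffs

    t = np.linspace(0, 10, 400)
    y = 2.0 + 1.5 * np.sin(2 * np.pi * t / 10) + 0.4 * np.cos(6 * np.pi * t / 10)
    coeffs, y_hat = fit_trig_regression(t, y, n_harmonics=16, period=10.0)
    print(np.round(coeffs[:5], 3), float(np.sqrt(np.mean((y - y_hat) ** 2))))
    ```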

    Keywords: random process, L-Markov process, prediction filtering, spectral characteristics, filtering operator

  • Application of the Residue Number System in Text Information Processing

    The article explores the application of the residue number system in text information processing. The residue number system, based on the principles of modular arithmetic, represents numbers as sets of residues relative to pairwise coprime moduli. This approach enables parallel computation, potential data compression, and increased noise immunity. The study addresses issues such as character encoding, parallel information processing, error detection and correction, computational advantages in implementing polynomial hash functions, as well as practical limitations of the residue number system.
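
    A minimal sketch of residue-number-system encoding of character codes: each code point is stored as residues modulo pairwise coprime moduli and recovered with the Chinese remainder theorem. The moduli are illustrative; their product must exceed the largest code point handled.

    ```python
    # Encode characters to residues and recover them via the Chinese remainder theorem.
    from math import prod

    MODULI = (251, 256, 257)                     # pairwise coprime; product > 0x10FFFF

    def to_residues(ch: str) -> tuple:
        return tuple(ord(ch) % m for m in MODULI)

    def from_residues(residues: tuple) -> str:
        M = prod(MODULI)
        x = 0
        for r, m in zip(residues, MODULI):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)         # CRT term: Mi * (Mi^-1 mod m)
            x %= M
        return chr(x)

    encoded = [to_residues(c) for c in "Дон"]
    print(encoded, "".join(from_residues(r) for r in encoded))
    ```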

    Keywords: residue number system, modular arithmetic, text processing, parallel computing, data compression, noise immunity, Chinese remainder theorem, polynomial hashing, error correction, computational linguistics

  • Development of a volumetric display for information and communication interaction in the Arctic zone

    The article describes the process of developing a volumetric display for information and communication interaction in the Arctic, where traditional means of visualization and communication face the challenges of extreme climate, isolation and limited infrastructure. An analysis of the main areas of application of volumetric displays in the Arctic zone is carried out. The main disadvantages of the methods for creating a volumetric image in existing 3D displays are considered. Taking into account the main task to be solved, namely creating the illusion of a three-dimensional object for a group of more than two people at a wide viewing angle, a description and analysis of the two main developed configurations of the optical system is given; the latter configuration meets the requirements, ensuring stable operation in Arctic conditions and opening up prospects for implementation in remote and hard-to-reach regions of the Far North.

    Keywords: volumetric display, Arctic zone, 3D image, system analysis, lens, optical system, computer modeling

  • Application of ontological modeling for automatic selection of significant features and semantic regularization of machine learning models for the development of intelligent information systems in the power industry

    Ontological modeling is a promising direction in the development of the scientific and methodological base for developing intelligent information systems in the power industry. The article proposes a new approach to using ontological models in creating artificial intelligence systems for forecasting time series in electrical engineering problems. Formal metrics are introduced: the ontological distance between a feature and a target variable, as well as the semantic relevance of a feature. Using examples of domain ontologies for wind energy and electricity consumption of an industrial enterprise, algorithms for calculating these metrics are demonstrated and it is shown how they allow ranking features, implementing an automated selection of the most significant features, and providing semantic regularization of training regression models of various types. Recommendations are given for choosing coefficients for calculating metrics, an analysis of the theoretical properties of metrics is carried out, and the applicability limits of the proposed approach are outlined. The results obtained form the basis for further integration of ontological information into mathematical and computer models for forecasting electricity generation and consumption in the development of industry intelligent systems.
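
    A minimal sketch of an ontological distance between a feature and the target variable as the shortest path length in the ontology graph, with a semantic relevance score derived from it. The toy wind-energy ontology and the decay formula below are assumptions, not the article's metrics; such a score could, for example, scale a regularization penalty on the corresponding regression coefficient.

    ```python
    # Ontological distance and a relevance score over a toy domain ontology.
    import networkx as nx

    ontology = nx.Graph()
    ontology.add_edges_from([
        ("wind_speed", "wind_turbine"), ("air_density", "wind_turbine"),
        ("wind_turbine", "power_output"), ("ambient_humidity", "weather_station"),
        ("weather_station", "wind_speed"),
    ])

    def semantic_relevance(feature: str, target: str, alpha: float = 0.5) -> float:
        """Relevance decays with ontological distance; unreachable features get 0."""
        try:
            d = nx.shortest_path_length(ontology, feature, target)
        except nx.NetworkXNoPath:
            return 0.0
        return 1.0 / (1.0 + alpha * d)

    for f in ["wind_speed", "air_density", "ambient_humidity"]:
        print(f, round(semantic_relevance(f, "power_output"), 3))
    ```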

    Keywords: ontology, ontological distance, feature relevance, systems analysis, explainable artificial intelligence, power industry, generation forecasting, electricity consumption forecasting

  • Analysis of Deep Neural Networks for Human Detection on the Ground from Quadcopter Flight Altitude

    In the modern world, when technology is developing at an incredible rate, computers have gained the ability to "see" and perceive the world around them like a human. This has led to a revolution in visual data analysis and processing. One of the key achievements was the use of computer vision to search for objects in photographs and videos. Thanks to these technologies, it is possible not only to find objects such as people, cars or animals, but also to accurately indicate their position using bounding boxes or masks for segmentation. This article discusses in detail modern models of deep neural networks used to detect humans in images and videos taken from a height and a long distance against a complex background. The architectures of the Faster Region-based Convolutional Neural Network (Faster R-CNN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Detector (SSD) and You Only Look Once (YOLO) are analyzed, their accuracy, speed and ability to effectively detect objects in conditions of a heterogeneous background are compared. Special attention is paid to studying the features of each model in specific practical situations, where both high-quality target object detection and image processing speed are important.

    Keywords: machine learning, artificial intelligence, deep learning, convolutional neural networks, human detection, computer vision, object detection, image processing

  • Synthesis of a non-stationary automatic braking control system for vehicle wheels

    The paper considers the synthesis of a non-stationary automatic control system for braking the wheels of a heavy vehicle using the generalized Galerkin method. The method under consideration is used to solve the problem of synthesizing a non-stationary system whose desired program motion is specified at the output of a nonlinear element. The paper presents the results of studying the impact of non-stationarity of the parameters of the fixed part of the system (the plant) on the deterioration of the transient process quality. For critical operating conditions, the controller parameters were recalculated, and the results of accounting for non-stationarity and re-synthesis were evaluated.

    Keywords: automatic control system, controller, braking system, non-stationarity of parameters, generalized Galerkin method

  • Study of the Effectiveness of Reed-Solomon Codes in a Practical Implementation Using the MATLAB Software Environment

    This study analyzes the performance of Reed-Solomon codes (RS codes) using the MATLAB software environment. RS codes are selected as a class of error-correcting codes characterized by high performance under multiple burst errors, which makes them widely applicable in areas such as digital television, data storage (CD/DVD, flash memory) and wireless communication. The paper demonstrates and evaluates the performance of RS codes in practice through their simulation in MATLAB. The study covers the creation of simulation models for encoding, error insertion and decoding data using RS algorithms in MATLAB. The performance of the codes is evaluated by calculating the bit error rate (BER) and other relevant metrics. The influence of key parameters of RS codes (e.g., codeword length, number of check symbols) on their error-correcting ability is analyzed. The results of the study are intended to clearly show how RS codes cope with different types of errors and how their performance can be optimized by tuning the parameters. The work highlights the importance of MATLAB as a tool for developing, testing and optimizing coding systems, providing practical tools for researchers and engineers.
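
    The article works in MATLAB; as a rough Python analogue (an assumption, not the article's code), the reedsolo package can illustrate the same encode / corrupt / decode cycle and how the number of check symbols bounds the correctable errors.

    ```python
    # Encode, inject byte errors, and decode with a Reed-Solomon codec.
    from reedsolo import RSCodec

    nsym = 10                                  # check symbols: corrects up to nsym // 2 byte errors
    rsc = RSCodec(nsym)

    message = b"engineering journal of don"
    codeword = bytearray(rsc.encode(message))

    codeword[3] ^= 0xFF                        # inject byte errors
    codeword[17] ^= 0x5A

    result = rsc.decode(bytes(codeword))
    decoded = result[0] if isinstance(result, tuple) else result   # recent versions return a tuple
    print(bytes(decoded) == message)
    ```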

    Keywords: Reed-Solomon codes, MATLAB, error correction, simulation, performance, error probability, communication systems, data storage

  • Combined Method for Summarizing Russian-Language Texts

    This article presents the development of a combined method for summarizing Russian-language texts, integrating extractive and abstractive approaches to overcome the limitations of existing methods. The summarization itself is preceded by the following stages: text preprocessing, comprehensive linguistic analysis using RuBERT, and semantic similarity-based clustering. The method involves extractive summarization via the TextRank algorithm and abstractive refinement using the RuT5 neural network model. Experiments conducted on the Gazeta.Ru news corpus, evaluated in terms of precision, recall, F-score, and ROUGE metrics, demonstrated the superiority of the combined approach over purely extractive methods (such as TF-IDF and statistical methods) and abstractive methods (such as RuT5 and mBART).
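
    A minimal TextRank sketch for the extractive stage: sentences are ranked by PageRank over a TF-IDF cosine-similarity graph and the top ones are kept. The RuBERT analysis and RuT5 abstractive refinement from the article are not reproduced, and the example sentences are illustrative.

    ```python
    # Extractive TextRank over a sentence similarity graph.
    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def textrank_summary(sentences, top_n=2):
        tfidf = TfidfVectorizer().fit_transform(sentences)
        sim = cosine_similarity(tfidf)
        graph = nx.from_numpy_array(sim)
        scores = nx.pagerank(graph)
        ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:top_n]
        return [sentences[i] for i in sorted(ranked)]   # keep original order

    sentences = [
        "The government approved a new support programme for IT companies.",
        "The programme extends tax benefits and preferential loans.",
        "Weather in the capital remained unusually warm this week.",
    ]
    print(textrank_summary(sentences))
    ```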

    Keywords: combined method, summarization, Russian-language texts, TextRank, RuT5

  • Projection identification of the parameters of an adaptive mathematical model of a DC motor with independent excitation

    The article considers parameter identification for adaptive models of linear non-stationary dynamic systems, using the example of a linearized adjustable model of a DC motor with independent excitation. A new method for estimating the parameters of adjustable models from a small number of observations is developed on the basis of projection identification and the apparatus of linear algebra and analytical geometry. To evaluate the developed identification method, the transient processes of the adaptive model of a DC motor with independent excitation, computed with the obtained parameter estimates, were compared with reference characteristics. The efficiency of the proposed identification method in DC electric drive control problems is shown.

    Keywords: DC motor, projection identification, dynamic system parameter estimation, adaptive model of non-stationary dynamic system

  • Forecast of the grade of manufactured products in small-tonnage non-stationary multi-product production of polymer products

    Modern computer systems for controlling chemical-technological processes make it possible to implement complex control algorithms in software, including machine learning methods and elements of artificial intelligence. Such algorithms can be applied, among other things, to complex non-stationary multi-product and flexible discrete productions, which include such low-tonnage chemical processes as the production of polymeric materials. The article discusses the production of fluoroplastic in batch reactors. This process occurs under constantly changing parameters such as pressure and temperature. One of the important tasks of the control system is to stabilize the quality of the produced polymer, and for this purpose it is important to predict the quality during the production process, before the fluoroplastic is released. The quality of the product, in turn, strongly depends on both the quality of the initial reagents and the actions of the operator. In non-stationary process conditions, typical virtual quality analyzers based on regression dependencies show poor results and are not applicable. The article proposes an architecture for a virtual quality analyzer based on mathematical forecasting methods using algorithms such as random forest and gradient boosting.
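
    A minimal sketch of the virtual quality analyzer idea: a gradient boosting model predicts a polymer quality index from batch process features before the product is released. The feature names and the synthetic data are assumptions for illustration, not the article's process data.

    ```python
    # Gradient boosting as a virtual quality analyzer on synthetic batch data.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 500
    X = np.column_stack([
        rng.normal(8.0, 0.5, n),     # reactor pressure, bar
        rng.normal(70.0, 3.0, n),    # temperature, deg C
        rng.normal(0.98, 0.02, n),   # initiator purity index
    ])
    y = 50 + 2.0 * X[:, 0] - 0.3 * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 1.0, n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out batches:", round(model.score(X_te, y_te), 3))
    ```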

    Keywords: polymerization, multi-product manufacturing, low-tonnage chemistry, quality forecasting, machine learning