The article examines the problem of wide-area network optimization and surveys currently existing software and hardware solutions. The purpose of the study is to determine the technological basis for developing a prototype of a domestic WAN optimizer. The survey of the subject area revealed that no freely available domestic solutions exist in this field. The resulting solution can be adapted to the specific requirements of a customer company by adding the necessary modifications to the prototype.
Keywords: global network, data deduplication, WAN optimizer, bandwidth
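Data deduplication, named in the keywords above, is a core WAN-optimizer technique: the stream is split into chunks, each unique chunk is transmitted and cached once, and repeats are replaced by short fingerprints. A minimal sketch, assuming fixed-size chunking and SHA-256 fingerprints (real optimizers typically use content-defined chunking):

```python
import hashlib

def deduplicate(data: bytes, chunk_size: int = 64):
    """Split a byte stream into fixed-size chunks and store each unique
    chunk once, keyed by its SHA-256 digest."""
    store = {}     # digest -> chunk bytes (the sender/receiver chunk cache)
    sequence = []  # digests in stream order; enough to rebuild the stream
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        sequence.append(digest)
    return store, sequence

def reassemble(store, sequence) -> bytes:
    """Rebuild the original stream from the cache and the digest sequence."""
    return b"".join(store[d] for d in sequence)
```

For repetitive traffic the cache stays far smaller than the stream itself, which is where the bandwidth saving comes from.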
The rapid electrification of transport and energy systems imposes extreme and often conflicting requirements on the performance of lithium-ion batteries. The classical paradigm of step-by-step optimization of individual components (materials and designs) has reached its limits, facing the challenge of negative synergistic effects. Despite the availability of advanced methods, ranging from detailed physical and chemical models to machine learning algorithms, the field of energy storage system design remains fragmented. This article provides a critical analysis of three isolated domains: the empirical-synthetic approach, physical and mathematical modeling, and software methods. Systemic shortcomings have been identified, including the lack of end-to-end methodologies, the "black box" problem of ML solutions, extreme requirements for data and computational resources, and limited portability of solutions. The concept of a hybrid predictive platform is proposed, which purposefully integrates fast regression models for deterministic parameters and specialized neural networks for predicting complex nonlinear degradation processes. This integration allows for the consideration of a battery cell as a single entity, optimizing the trade-offs between key characteristics (capacity, power, lifespan, and safety) during the virtual design phase, resulting in reduced time and cost.
Keywords: energy storage systems, system approach, electrode materials, optimization, system design, machine learning, hybrid models, degradation prediction, performance optimization
The increasing complexity of cyberattacks, often involving multiple vectors and aimed at achieving various goals, necessitates advanced modeling techniques to understand and predict attacker behavior. This paper proposes a formal approach to describe such attacks using a weakly connected oriented tree model that satisfies specific conditions. The model is designed to represent the attack surface and a collection of attack vectors, allowing for the analysis of possible attack scenarios. We introduce a sequential composition operation that combines sets of attack vectors, enabling the modeling of combined attacks. The study includes an example of an attack on an information system through a vulnerability that allows brute-force password guessing and phishing emails, with the goals of either obtaining a database or causing a denial of service. We investigate the set of attack scenarios generated by the model and formulate a rule for estimating the number of possible scenarios for an arbitrary number of attack vector sets. The proposed method facilitates preliminary analysis of attack scenarios, aiding cybersecurity professionals in making informed decisions about implementing additional defense mechanisms at various stages of an attack. The results demonstrate the applicability of the model for evaluating attack scenarios and provide a foundation for further research into more complex attack structures.
Keywords: attack modeling, information security, attack trajectory, attack scenario, attack vector, cybersecurity
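The sequential composition described above combines sets of attack vectors stage by stage, and the counting rule for independent stages is the product of the set sizes. A minimal sketch of that idea, using the brute-force/phishing example from the abstract (the stage and vector names are illustrative, not the paper's formal notation):

```python
from itertools import product

def compose(*vector_sets):
    """Sequential composition of attack-vector sets: each scenario picks
    one vector from every stage, in order."""
    return [tuple(p) for p in product(*vector_sets)]

# Illustrative stages: initial access, then attacker goal.
stage1 = {"brute-force password guessing", "phishing e-mail"}
stage2 = {"obtain database", "denial of service"}

scenarios = compose(stage1, stage2)
```

Under this reading, the number of scenarios for k independent stages is simply |V1| * |V2| * ... * |Vk|, which matches the estimation rule the abstract mentions.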
This study addresses the technical bottlenecks of generative AI in architectural style control by proposing a node-based workflow construction method built on ComfyUI (a graphical user interface for the Stable Diffusion model that simplifies management of image-generation parameters), aiming at precise, controllable generation of functionalist architectural renderings. By deconstructing the technical characteristics of the Stable Diffusion model (a generative AI model based on diffusion processes that transforms noise into images through iterative noise removal), neural network components such as ControlNet edge constraints (a neural network architecture for precise control of image generation via additional input data) and LoRA module enhancements (Low-Rank Adaptation, a method for fine-tuning large models with low-rank matrices at minimal computational cost) are encapsulated into visual nodes, establishing a three-phase generation path of "case analysis - parameter encoding - dynamic adjustment". Experiments on 10 classical functionalist architectural cases employed orthogonal experimental methods to validate node-combination strategies, revealing that the optimal workflow, incorporating MLSD straight-line detection (M-LSD, a lightweight line-segment detection model used as a ControlNet conditioning) and LoRA prefabricated-component reinforcement, significantly improves architectural style transfer effectiveness. The research demonstrates: 1) the node-based system overcomes the "black box" limitations of traditional AI tools by exposing latent-space parameters (the multi-dimensional space in which the network encodes semantic features of the data), enabling architects to configure professional elements directionally; 2) workflow templates support rapid recombination within 4 nodes, enhancing cross-project adaptability while further compressing single-image generation time; 3) strict architectural typology matching (e.g., residential-to-residential, office-to-office) is critical for successful style transfer, as typological deviations cause structural logic error rates to surge. This research holds significant implications for architectural design: leveraging ComfyUI to develop workflows transforms how architects visualize and communicate ideas, thereby improving project outcomes, and the experiments demonstrate practical applications of the technology, proving its potential to accelerate design processes and expand architects' creative possibilities.
Keywords: comfyui, functionalist architecture, style transfer, node-based workflow, artificial intelligence, architectural design, generative design
The article investigates the formation of professional competencies among students of a technical university (from bachelors to postgraduates) in the context of the digital transformation of higher education. The necessity of transitioning from traditional pedagogical models to a methodology integrating digital educational technologies and psychological-pedagogical support is substantiated. Central attention is paid to the potential of the digital environment for developing practice-oriented skills, critical thinking, and academic autonomy. The results of a pedagogical experiment are presented, within which a blended learning model based on reciprocal interaction and content co-creation was tested: students acted as knowledge constructors, developing criterion-referenced test tasks, while instructors performed the function of metacognitive experts. It is shown that the implementation of this model leads to a statistically significant increase in the level of professional and meta-professional competencies. The conclusion is made that digital tools, used as a platform for co-authorship, serve as a meaning-forming framework of pedagogical design, contributing to the training of a new type of engineers – reflective practitioners and co-authors of the educational process.
Keywords: construction and technical forensic examination, special knowledge, forensic expert, examination production, examination procedure
The article is devoted to the problem of estimating unknown parameters of linear regression models by the least absolute deviations method. Two well-known approaches to identifying regression models are considered: the first is based on solving a linear programming problem; the second, known as the iterative least-squares method, yields an approximate solution. To test the iterative method, a special program was developed in the Gretl software package. A dataset of house prices and the factors influencing them, comprising 20,640 observations, was used for the computational experiments. The best results were obtained with the quantreg function built into Gretl, which implements the Frisch-Newton algorithm; the second-best results with the iterative method; and the third by solving a linear program with the LPSolve software package.
Keywords: regression analysis, least absolute deviations method, linear programming, iterative least squares method, variational weighted quadratic approximation method
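The iterative approach above is commonly implemented as iteratively reweighted least squares: each pass solves a weighted least-squares problem with weights 1/|residual|, which drives the fit toward the least-absolute-deviations solution. A minimal NumPy sketch of that scheme (a generic IRLS-for-LAD illustration, not the article's exact program):

```python
import numpy as np

def lad_irls(x, y, n_iter=50, eps=1e-8):
    """Approximate least-absolute-deviations regression via iteratively
    reweighted least squares, with an intercept column added."""
    X = np.column_stack([np.ones(len(x)), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS start
    for _ in range(n_iter):
        r = y - X @ beta
        w = 1.0 / np.maximum(np.abs(r), eps)      # guard near-zero residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```

Because LAD weights large residuals down rather than squaring them, a single gross outlier barely moves the fitted line, unlike ordinary least squares.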
The article presents aspects of designing an artificial intelligence module that analyzes video streams from surveillance cameras in order to classify objects and interpret their actions, as part of the task of collecting statistical information and recording abnormal activity of the monitored objects. A sequence diagram of the user's active-monitoring process via a Telegram bot and a conceptual diagram of the interaction between the information-analytical system of a pedigree dog kennel on the 1C:Enterprise platform and external services are presented.
Keywords: computer vision, machine learning, neural networks, artificial intelligence, action recognition, object classification, YOLO, LSTM model, behavioral patterns, keyword search, 1C:Enterprise, Telegram bot
A set of techniques is presented for obtaining retrospective, statistical, and expert information, integrating data, assessing competence deficits, and managing knowledge to compensate for competence deficits in organisational systems. For the practical implementation of an integrated approach to improving the management of organisational systems, a model and an algorithm for obtaining data through this set of techniques have been developed. The proposed methodological solutions are expected to significantly improve the efficiency of organisational-systems management through the rational application of automated management systems with trusted artificial intelligence components.
Keywords: algorithm, critical events, integration, information resources, recommendations, systematisation, efficiency
The article investigates the efficiency of separating droplet moisture of a working fluid from a vertically ascending gas-liquid stream in intensive wet dust- and gas-purification devices, as a function of the relative gas content, the velocity of the gas-liquid stream, and the design characteristics of a radial inertial louver separator that forms part of the gas-purification equipment. Within the investigated range of hydraulic parameters of the separation mode, optimal ratios between the key design parameters of the proposed separation device have been established for implementing an effective droplet-extraction process.
Keywords: intensive wet dust and gas cleaning devices, radial inertia louver separator, gas-liquid flow, efficiency of droplet collection
With the rapid growth of information on the internet, the accumulation of large databases, and the constant influx of data from various sensors and intelligent systems, it is becoming increasingly difficult for users to find what they are really looking for. The development of automatic summarization methods is therefore considered a crucial task in natural language processing. These needs have motivated various methods and approaches for extracting syntactic and semantic information from documents and for classifying and systematizing it. This article develops the architecture of a hybrid syntactic fuzzy system for extracting semantic features from text and presents its mathematical formalization. The author's method enables a transition from empirical assessments of word importance to a rigorous, formalized calculation of their semantic weight.
Keywords: semantics, sentence, extraction, fuzzy logic, comparison, data
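The abstract does not give the author's formalization, but the general idea of a fuzzy semantic weight can be sketched generically: map observable features of a word (normalized frequency, position) through membership functions and combine them with a t-norm. A toy illustration only, with entirely assumed membership shapes:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def semantic_weight(freq_norm, pos_norm):
    """Toy fuzzy weight: 'salient frequency' AND 'early position',
    combined with the min t-norm. Illustrative, not the author's model."""
    mu_freq = triangular(freq_norm, 0.1, 0.5, 1.0)  # mid-frequency words matter most
    mu_pos = 1.0 - pos_norm                          # earlier words weigh more
    return min(mu_freq, mu_pos)
```

The point of such a formalization is that every word receives a reproducible numeric weight instead of an empirical, hand-tuned importance score.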
The article presents a comparative analysis of the results of verification calculations of a monolithic reinforced concrete floor slab and full-scale tests, and formulates the main factors affecting the accuracy of determining the bearing capacity of monolithic reinforced concrete structures in verification calculations. In accordance with regulatory documentation, the bearing capacity of building structures is determined by performing verification calculations according to current design standards and, if necessary, by conducting full-scale tests to confirm the bearing capacity established by the calculations. In modern technical-inspection practice, full-scale tests are extremely rare because of their high labor intensity and the complexity of interpreting the results. However, it is full-scale tests that allow the most accurate determination of a structure's bearing capacity. This makes it possible to assess the technical condition correctly, to optimize the development of strengthening projects through a well-founded choice of method and accurate calculation of the required scope of work, and, as a result, to minimize material costs.
Keywords: bearing capacity, full-scale tests, verification calculation, reinforced concrete, floor, deflection, limit state groups, technical inspection
The article discusses machine learning methods, their application areas, limitations, and possibilities. It also highlights achievements in deep learning that make it possible to obtain accurate results with optimal time and effort, and describes the promising transformer neural network architecture in detail. As an alternative approach, it is proposed to use a generative adversarial network to convert a scan into elements of a digital information model.
Keywords: scanning, point cloud, information model, construction, objects, representation, neural network, machine learning
The article formulates the task of developing a fire safety management procedure based on a risk-based approach that takes into account the preferences of the decision maker in the multi-criteria "result - cost - time" system. A multi-criteria mathematical model of the procedure has been developed, together with an algorithm for its implementation, supporting information technology, and a test case.
Keywords: risk-based approach, fire risk, decision support systems, Pareto-optimal solutions
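The Pareto-optimal solutions named in the keywords above are the options not dominated on the three criteria: a minimal sketch, assuming "result" is maximized while "cost" and "time" are minimized (the tuples below are invented for illustration):

```python
def pareto_front(options):
    """Return options not dominated on (result, cost, time):
    result is maximized, cost and time are minimized."""
    def dominates(a, b):
        # a dominates b: at least as good on every criterion, and not equal.
        return a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2] and a != b
    return [o for o in options if not any(dominates(p, o) for p in options)]
```

The decision maker's preferences then serve only to pick one option from the front, rather than to rank every candidate.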
This paper describes a virtualized environment designed for comprehensive experiments with peer-to-peer networks and information security algorithms. The architecture is based on integrating the VMware hypervisor with the EVE-NG network device emulation platform, providing flexible resource allocation and realistic topology simulation. A MikroTik router serves as the central node, enabling a star-shaped scheme of interaction among virtual machines running various operating systems (Windows 7, 10, 11; Debian Linux). The chosen configuration simplifies testing of multiple initial connections and multi-level cryptography algorithms, ensures stable routing, and supports further automation of software installation using Bash or PowerShell scripts.
Keywords: information security, virtualized environment, multiple initial connections, peer-to-peer network, virtual private network
Introduction: Mobile Gaming Addiction (MGA) has emerged as a significant public health concern, with the World Health Organization recognizing it as a gaming disorder. Russia, with its growing mobile gaming market, is no exception. Aims and Objectives: This study aims to explore the feasibility of using neural networks for early MGA detection and intervention, with a focus on the Russian context. The primary objective is to develop and evaluate a neural network-based model for identifying behavioral patterns associated with MGA. Methods: A proof of concept study was conducted, employing a simplified neural network architecture and a dataset of 101 observations. The model's performance was evaluated using standard metrics, including accuracy, precision, recall, F1-score, and AUC-ROC score. Results: The study demonstrated the potential of neural networks in detecting MGA, achieving an F1-score of 0.75. However, the relatively low AUC-ROC score (0.58) highlights the need for addressing dataset limitations. Conclusion: This study contributes to the growing body of literature on MGA, emphasizing the importance of considering regional nuances and addressing dataset limitations. The findings suggest promising avenues for future research, including dataset expansion, advanced neural architectures, and region-specific mobile applications.
Keywords: neural networks, neural network architectures, autoencoder, digital addiction, gaming addiction, digital technologies, machine learning, artificial intelligence, mobile game addiction, gaming disorder
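The metrics the study reports (precision, recall, F1) follow directly from the confusion-matrix counts of a binary classifier. A minimal sketch of that computation in plain Python (the prediction lists below are invented for illustration; they do not reproduce the study's data):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = addicted class),
    computed from raw true/predicted label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

A gap like the study's (F1 of 0.75 versus AUC-ROC of 0.58) typically signals that the threshold-specific scores look acceptable while the ranking quality across thresholds is weak, which is consistent with the dataset limitations the authors note.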