The article discusses modern methods for protecting bank customers' personal information based on differential anonymization of data using trusted neural networks. It provides an overview of the regulatory framework, analyzes technological approaches, and describes a developed multi-level anonymization model that combines cryptographic and machine learning techniques. Special attention is paid to the balance between preserving data utility and minimizing the risk of customer identity disclosure.
Keywords: differential anonymization, trusted neural network, personal data, banking technologies, information security, cybersecurity
The article discusses current threats and vulnerabilities of telephone subscribers in the context of mass digitalization, the development of artificial intelligence and machine learning technologies, and their use in fraudulent scenarios. The study analyzes the main vulnerability factors and provides statistical data on telephone fraud incidents in Russia and abroad. Special attention is given to the phenomena of trust in authority, insufficient digital literacy, and the use of voice synthesis and deepfake technologies for social engineering attacks.
Keywords: social engineering, fraud, vishing, deepfake, artificial intelligence, digital literacy, information security
Electrocardiogram (ECG)-based biometric authentication systems offer intrinsic resistance to spoofing due to their physiological uniqueness. However, their performance in dynamic real-world settings, such as wearable devices or stress-induced conditions, is often compromised by noise, electrode displacement, and intra-subject variability. This study proposes a novel hybrid framework that enhances robustness, ensuring high authentication accuracy and reliability in adverse conditions, through integrated wavelet-based signal processing for noise suppression and a deep-learning classifier for adaptive feature recognition. The system employs preprocessing, QRS complex detection, distance–deviation modeling (a statistical comparison method that quantifies morphological similarity between ECG templates by analyzing amplitude and shape deviations), and an averaging-threshold mechanism, combined with a feedforward Multi-Layer Perceptron (MLP) neural network for classification. The MLP is trained on extracted ECG features to capture complex nonlinear relationships between waveform morphology and user identity, ensuring adaptability to variable signal conditions. Experimental validation on the ECG-ID dataset achieved 98.8% accuracy, 95% sensitivity, an Area Under the Curve (AUC) of 0.98, and a low false acceptance rate, outperforming typical wearable ECG authentication systems that report accuracies between 90% and 95%. With an average processing time of 8 seconds, the proposed method supports near real-time biometric verification suitable for healthcare information systems, telehealth platforms, and IoT-based access control. These findings establish a scalable, adaptive, and noise-resilient foundation for next-generation physiological biometric authentication in real-world environments.
Keywords: electrocardiogram biometrics, wavelet decomposition, QRS complex detection, feedforward neural network, deep learning classification, noise-resilient authentication, biometric security
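The distance–deviation comparison with an averaging threshold can be illustrated with a minimal sketch; the z-normalization, the toy beat shapes, and the threshold value are assumptions for illustration, not the paper's exact model:

```python
import numpy as np

def template_deviation(enrolled, probe):
    """Mean absolute amplitude deviation between two aligned, z-normalized beats."""
    e = (enrolled - enrolled.mean()) / enrolled.std()
    p = (probe - probe.mean()) / probe.std()
    return float(np.mean(np.abs(e - p)))

def accept(enrolled_beats, probe, threshold=0.2):
    """Averaging-threshold rule: accept when the mean deviation over the
    enrolled templates falls below the decision threshold."""
    return float(np.mean([template_deviation(b, probe) for b in enrolled_beats])) < threshold

t = np.linspace(0, 1, 200)
beat = np.exp(-((t - 0.5) ** 2) / 0.002)    # idealized QRS-like spike
same = beat + 0.02 * np.sin(40 * t)         # same subject, mild sensor noise
other = np.exp(-((t - 0.25) ** 2) / 0.02)   # different morphology (impostor)
same_ok, other_ok = accept([beat], same), accept([beat], other)
```

In a full pipeline the beats would first pass wavelet denoising and QRS detection, and the MLP classifier would operate on features extracted from the aligned beats.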
Information technologies have become increasingly used in various fields, be it document management or payment systems. One of the most popular and promising technologies is cryptocurrency. Since cryptocurrencies must ensure the security and reliability of data in the system, most of them rely on blockchain and complex cryptographic protocols, such as zero-knowledge proof (ZKP) protocols. An important aspect of securing these systems is therefore verification, since it can be used to assess a system's resistance to various attacks, as well as its compliance with security requirements. This paper considers both the concept of verification itself and the methods for implementing it, and compares verification methods suitable for zero-knowledge protocols. It concludes that an integrated approach to verification is needed, since no single method can cover all potential vulnerabilities; different verification methods must therefore be applied at different stages of system design.
Keywords: cryptocurrency, blockchain, verification, formal method, static analysis, dynamic method, zero-knowledge proof protocol
The article provides a well-founded definition of an intelligent digital twin of an information security protection object and identifies the main stages of its development. It also develops set-theoretic models of the protection object and the intelligent digital twin, which allow for the identification of their identical components and the distinctive features that determine the mechanism for countering threats. Based on conflict theory, the relationship between the protected object and the threat was identified both in the absence and in the presence of an intelligent digital twin in the system protecting the object from information security threats. The resulting macro-dynamic models of the considered situations make it possible to justify the feasibility of a protection mechanism based on the object's intelligent digital twin and to assess the overall effect of its application.
Keywords: information security, object of protection, intelligent digital twin, threat, set-theoretic model, conflict theory, macrodynamic model
This paper proposes a novel model of computer network behavior that incorporates weighted multi-label dependencies to identify rare anomalous events. The model accounts for multi-label dependencies not previously encountered in the source data, enabling a "preemptive" assessment of their potential destructive impact on the network. An algorithm for calculating the potential damage from the realization of a multi-label dependency is presented. The proposed model is applicable for analyzing a broad spectrum of rare events in information security and for developing new methods and algorithms for information protection based on multi-label patterns. The approach allows for fine-tuning the parameters of multi-label dependency accounting within the model, depending on the specific goals and operating conditions of the computer network.
Keywords: multi-label classification, multi-label dependency, attribute space, computer attacks, information security, network traffic classification, attack detection, attribute informativeness, model, rare anomalous events, anomalous events
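Concurrent attack labels and the dependencies between them can be modeled with a generic multi-label pipeline; the sketch below uses synthetic data, assumed feature names, and a Random Forest base learner, and is not the paper's model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
# Assumed per-flow features: packet rate, mean packet size, SYN ratio, payload entropy
X = rng.normal(size=(400, 4))
y_ddos = (X[:, 0] > 0.5).astype(int)
# Multi-label dependency: a scan label that always co-occurs with the DDoS label
y_scan = ((X[:, 2] > 0.3) | (y_ddos == 1)).astype(int)
Y = np.column_stack([y_ddos, y_scan])

clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=50, random_state=0))
clf.fit(X, Y)
pred = clf.predict(X[:5])                     # one 0/1 column per attack label
train_acc = float((clf.predict(X) == Y).mean())
```

Rare label combinations never seen in the source data would show up here as label vectors the classifier has no training support for, which is precisely where a "preemptive" damage assessment becomes relevant.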
The article discusses the development of a method for protecting confidential images in instant messengers based on masking with orthogonal matrices. The vulnerability of the system to brute-force attacks and account compromise is analyzed. The main focus is on the development of an architecture for analyzing abnormal activity and performing adaptive authentication. The article presents a system structure with independent security components that provide blocking in response to brute-force attacks and flexible session management. The interaction of the modules within a unified security system is described, with functions distributed between server and client components.
Keywords: information security, messenger, messaging, communications, instant messaging systems, security audits, brute-force attacks
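Masking with an orthogonal matrix rests on the identity Qᵀ(Q·X·Qᵀ)·Q = X: anyone holding the key matrix Q can undo the mask exactly. A minimal sketch on a toy 8×8 pixel block (the key-generation scheme is an assumption):

```python
import numpy as np

def random_orthogonal(n, rng):
    """Orthogonal key matrix from the QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(n, n)))
    return q * np.sign(np.diag(r))   # sign fix makes the draw Haar-uniform

rng = np.random.default_rng(7)
img = rng.integers(0, 256, size=(8, 8)).astype(float)   # toy 8x8 image block
Q = random_orthogonal(8, rng)
masked = Q @ img @ Q.T        # masking conceals the pixel structure
restored = Q.T @ masked @ Q   # exact recovery with the same key matrix
```

Because Q is orthogonal, the transform preserves the energy of the block while scrambling its spatial layout, and recovery is lossless up to floating-point error.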
The aim of this study is to analyze methods for protecting radio channels from intentional interference by managing wireless channel resources, with an emphasis on identifying key challenges and directions for further research in this area. The primary method applied is the ontological approach of knowledge engineering. The work collects and systematizes the main approaches to counteracting jamming of communication channels and analyzes studies aimed at formalizing radio network problems for the purpose of modeling and analysis. The results made it possible to determine relevant development directions, identify existing gaps, formulate requirements for the model under development, and justify the choice of methods to be used in subsequent research.
Keywords: FHSS, interference, radio channel, radio communication, telecommunications, jamming, network, modeling, communication, mitigation, security
The relevance of this article stems from the need to develop lightweight and scalable solutions for decentralized systems (blockchain, IoT), where traditional cryptographic methods are inefficient or excessive. A theoretical-practical method for protecting unmanned transportation systems against Sybil attacks has been developed, based on a server robot’s analysis of each client robot’s unique directional electromagnetic signal power map signature. Experimental solutions for Sybil attack protection are demonstrated using two aerial servers deployed on quadcopters. The proposed keyless Sybil attack defense method utilizes WiFi signal parameter analysis (e.g., power scattering and variable antenna radiation patterns) to detect spoofed client robots. Experiments confirm that monitoring unique radio channel characteristics effectively limits signature forgery. This physical-layer approach is also applicable to detecting packet injection in robot Wi-Fi networks. The key advantages of the developed method include the elimination of cryptography, reducing computational overhead; the use of physical signal parameters as a "fingerprint" for legitimate devices; and the method's scalability to counter other threats, such as traffic injection.
Keywords: protection against Sybil attacks, unmanned vehicle systems, electromagnetic signal power map, WiFi signal, signature falsification, spoofing, synthetic aperture radar
Methods for increasing the efficiency of data analysis based on topology and analytical geometry are becoming increasingly popular in modern information systems. However, because topological structures are highly complex, the main tasks of processing and storing information are solved using spatial geometry combined with modular arithmetic and the analytical specification of geometric structures, whose description underpins the development of new methods for solving optimization problems. The practical application of elliptic-curve cryptography, including in network protocols, relies on interpolation methods for approximating function graphs, since accuracy may be lost when many sequential mathematical operations are performed. This problem stems from the computing architecture of modern devices. Since errors can accumulate, data approximation methods must be applied sequentially as calculations proceed.
Keywords: elliptic curve, information system, data analysis, discrete logarithm, point order, scalar, subexponential algorithm
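The group arithmetic behind elliptic-curve cryptography can be illustrated on a standard textbook curve, y² = x³ + 2x + 2 over GF(17). This is a toy sketch only; real deployments use standardized curves and constant-time implementations:

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17); the point (5, 1) has order 19.
P_MOD, A = 17, 2

def ec_add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:                                        # tangent slope (doubling)
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                                             # chord slope
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def scalar_mult(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R
```

The hardness of recovering k from k·P (the discrete logarithm named in the keywords) is what the cryptography rests on; the cumulative-error concern in the abstract does not arise here because all arithmetic is exact modular integer arithmetic.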
This paper is devoted to the theoretical analysis and comparative characteristics of methods and algorithms for automatic identity verification based on the dynamic characteristics of a handwritten signature. The processes of collecting and preprocessing dynamic characteristics are considered. An analysis of classical methods, including hidden Markov models, support vector machines, and modern neural network architectures, including recurrent, convolutional, and Siamese neural networks, is conducted. The advantages of using Siamese neural networks in verification tasks under the condition of a small volume of training data are highlighted. Key metrics for assessing the quality of biometric systems are defined. The advantages and disadvantages of the considered methods are summarized, and promising areas of research are outlined.
Keywords: verification, signature, machine learning, dynamic characteristic, hidden Markov models, support vector machine, neural network approach, recurrent neural networks, convolutional neural networks, siamese neural networks, type I error
The article describes a program that reminds users to change their account password in a timely manner, in order to comply with information security requirements and prevent "hacks" and network attacks. The program was developed in a virtual machine (VirtualBox) running the Linux Mint operating system. The software is written in the Python programming language using libraries such as notify2 (a package for displaying desktop notifications in Linux) and schedule (a library for scheduling recurring tasks), among others. During development, key functions ensuring security and ease of use were implemented: password validity checking and user notification.
Keywords: Cyber hygiene, password protection, cybersecurity, programming language, Astra Linux, operating system, graphic editor, software product, program, user account, information security, virtual machine
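The expiry check at the heart of such a reminder can be sketched with the standard library alone; the 90-day policy and the function names are assumptions, and the article's program additionally uses notify2 and schedule for desktop notifications and periodic scheduling:

```python
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)   # assumed policy interval

def password_expired(last_changed, now=None):
    """True when the password is older than the policy allows."""
    now = now or datetime.now()
    return now - last_changed > MAX_PASSWORD_AGE

def reminder(last_changed, now=None):
    """Message the scheduler would hand to the desktop-notification layer."""
    if password_expired(last_changed, now):
        return "Password expired: please change your account password."
    return "Password is still valid."
```

In the full program, a schedule job would call `reminder` periodically and route an expiry message through notify2 instead of returning a string.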
The article provides an overview of the current state of password authentication and highlights its main problems. Various passwordless options are considered as replacements for password authentication, and each is analyzed in terms of its disadvantages and its potential to replace passwords.
The analysis revealed that some alternatives can only act as an additional factor in multi-factor authentication, such as OTP and push notifications. Others, on the contrary, should not be used as an authentication method at all; these include QR codes.
As a result of the analysis, two passwordless directions were identified as clear favorites: biometrics and passkeys. When comparing the finalists, the choice fell on passkeys, since they lack the main and critical drawback of biometric authentication: dependence on keeping the original biometric data secret. If biometric data is compromised, the consequences for a person are severe, since it cannot be changed without surgical intervention.
Passkeys, by contrast, demonstrate a level of protection comparable to biometrics but are free of this drawback. At the same time, passkeys, or more precisely the current FIDO2 standard, have several shortcomings that hinder adoption. These include the potential for malware to act as a client. Another, no less important, problem is unlinking an old key and linking a new one if the first is lost or fails.
To solve this problem, it is necessary to develop a secure authentication protocol using passkey technology.
Keywords: password authentication, passwordless authentication, push notification, QR-code, biometric authentication, passkey, FIDO2, WebAuthn, CTAP2.1
The paper proposes a method to counteract unauthorized privilege escalation in the Android operating system. The proposed method uses the ARM architecture's hardware virtualization technology to control access to the operating system's kernel data structures that store task identification information.
Keywords: information security, privilege escalation, Android, hypervisor
The paper deals with the problem of assessing the security level of critical information infrastructure objects in the financial sector based on organizational structure and management factors in the context of internal audit. Standards do not allow flexible assessment of the indicators characterizing information security requirements and instead propose obtaining expert assessments based on subjectively selected elements (documents, facts) related to particular requirements. The article considers a Bayesian approach to assessing the values of partial indicators for all available characteristics of information security requirements, which allows them to be obtained on a continuous scale. A corresponding model covering the calculation of partial and generalized indicator values is presented. It improves the standards-defined approach to assessing the security level of critical information infrastructure objects during internal audit by evaluating partial indicator values on a continuous scale and by taking into account the history of changes in the characteristics of information security requirements.
Keywords: information security, Bayesian approach, critical information infrastructure objects, indicators of compliance with information security requirements, level of protection of objects, model with probabilistic components
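A minimal sketch of the Bayesian idea, assuming a Beta-Bernoulli model over audited evidence items; this illustrates a continuous-scale compliance indicator, not the paper's full model with history effects:

```python
def beta_posterior_mean(conforming, nonconforming, alpha=1.0, beta=1.0):
    """Posterior mean of a Beta-Bernoulli compliance indicator on [0, 1].

    Each audited evidence item (document, fact) either supports the
    requirement (conforming) or contradicts it (nonconforming); the
    Beta(alpha, beta) prior encodes the indicator before evidence is seen.
    """
    return (alpha + conforming) / (alpha + conforming + beta + nonconforming)

# 9 of 10 evidence items support the requirement, flat prior Beta(1, 1):
indicator = beta_posterior_mean(9, 1)
```

Unlike a binary pass/fail verdict, the posterior mean moves smoothly as evidence accumulates, and re-running the update after each audit round is one way to account for the history of changes.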
The article provides a brief analysis of information security measures, which allowed us to substantiate the leading role of technical measures for protecting elements of computer systems, digital systems, cellular communication systems, and users of these systems in modern conditions. The analysis of the growth of cybercrime indicators in Russia revealed the obsolescence of the existing comprehensive approach to protecting elements of computer systems, digital systems, cellular communication systems, and users of these systems, and determined the necessity, timeliness, and relevance of creating and using an information security ecosystem. An analysis of existing individual solutions for creating and using information security ecosystems revealed the need to use intelligent digital twins of protected objects to neutralize information security threats. Based on this analysis, the features of implementing an information security ecosystem using intelligent digital twins of computer systems, digital systems, cellular communication systems, and users of these systems have been identified.
Keywords: information security ecosystem, intelligent digital twin, information security threat, vulnerability analysis, threat monitoring and detection, and attack protection and prevention
The paper presents a methodology that includes stages of task performance control, data collection and analysis, determination of reliability and efficiency criteria, reasonable selection, communication, implementation and control of the results of management decisions. A cyclic algorithm for comprehensive verification of compliance with the reliability and efficiency criteria of the system has been developed, allowing for prompt response to changes, increased system stability and adaptation to adverse environmental impacts. Improved mathematical formulas for assessing the state of organizational systems are proposed, including calculation of the readiness factor, level of planned task performance and compliance with established requirements. The application of the methodology is aimed at increasing the validity of decisions made while reducing the time for decision-making, as well as ensuring the relevance, completeness and reliability of information in information resources in the interests of sustainable development of organizational systems.
Keywords: algorithms, time, control, reliability and efficiency criteria, indicators, resources, management decisions, cyclicity
This article examines the growing threat of web scraping (parsing) as a form of automated cyberattack. Although scraping publicly available data is often legal, its misuse can lead to serious consequences, including server overload, data breaches, and intellectual-property infringement. Recent court cases against OpenAI and ChatGPT highlight the legal uncertainty associated with unauthorized data collection.
The study presents a dual approach to combating malicious scraping. The first component, a traffic classification model, is a machine-learning solution based on Random Forest algorithms that achieves 89% accuracy in distinguishing legitimate from malicious bot traffic, enabling early detection of scraping attempts. The second, a data deception technique, dynamically modifies HTML content to feed false information to scrapers while preserving the original appearance of the page. This technique prevents data collection without affecting the user experience.
The implemented capabilities include real-time traffic monitoring, dynamic page obfuscation, and automatic response systems.
The proposed system demonstrates effectiveness in mitigating the risks associated with scraping and emphasizes the need for adaptive cybersecurity measures in evolving digital technologies.
Keywords: parsing, automated attacks, data protection, bot detection, traffic classification, machine learning, attack analysis, data spoofing, web security
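A generic traffic-classification sketch in the spirit of the described model; the sessions are synthetic, the feature names are assumptions, and the paper's 89% figure comes from its own data, not from this toy example:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Assumed per-session features: requests/min, inter-request jitter (s), asset-load ratio
humans = np.column_stack([rng.normal(5, 2, n), rng.normal(3.0, 1.0, n), rng.normal(0.9, 0.05, n)])
bots = np.column_stack([rng.normal(60, 15, n), rng.normal(0.1, 0.05, n), rng.normal(0.1, 0.1, n)])
X = np.vstack([humans, bots])
y = np.array([0] * n + [1] * n)   # 0 = human, 1 = scraper bot

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Sessions the model flags as bots would then be served the deception-modified HTML rather than being blocked outright, which keeps the scraper collecting worthless data.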
The article analyzes various approaches to the generation and detection of audio deepfakes. Particular attention is paid to the preprocessing of acoustic signals, extraction of voice signal parameters, and data classification. The study examines three groups of classifiers: Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and neural networks. For each group, effective methods were identified, and the most successful approaches were determined based on a comprehensive analysis. The study revealed two approaches demonstrating high accuracy and reliability: a detector based on temporal convolutional networks analyzing MFCC-cepstrogram achieved an EER metric of 0.07%, while the Support Vector Machine with a radial basis function kernel reached an EER of 0.5%. Additionally, the latter method demonstrated the following metrics on the ASVspoof 2021 dataset: Accuracy = 99.6%, F1-score = 0.997, Precision = 0.998, and Recall = 0.994.
Keywords: audio deepfakes, preprocessing of acoustic signals, support vector machine, k-nearest neighbors, neural networks, temporal convolutional networks, deepfake detection
This paper is devoted to the theoretical analysis of the methods used to verify the dynamics of a signature obtained from a graphics tablet. Three fundamental approaches to the problem are classified: template matching, stochastic modeling, and discriminative classification. Each approach is considered using a specific method as an example: dynamic time warping, hidden Markov models, and the support vector machine, respectively. For each method, the theoretical foundations are disclosed, the mathematical apparatus is presented, and the main advantages and disadvantages are identified. The results of the comparative analysis can serve as a theoretical basis for developing modern signature dynamics verification systems.
Keywords: verification, biometric authentication, signature dynamics, graphic tablet, classification of methods, template matching, stochastic modeling, discriminative classification, hidden Markov models, dynamic time warping
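Dynamic time warping aligns two signature-dynamics sequences of different lengths before comparing them. A minimal sketch of the classic cost-matrix recurrence (absolute-difference local cost, no windowing constraints):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match along the warping path
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```

In template-matching verification, the DTW distance between a probe signature's dynamic channel (e.g. pen pressure over time) and the enrolled reference is compared against a decision threshold.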
The article presents the results of a study on the effectiveness of the hashing algorithms Argon2, Scrypt, and Bcrypt in the context of developing web applications with user registration and authentication features. The main focus of this research is on analyzing the algorithms' resilience to brute-force attacks, hardware attacks (GPU/ASIC), as well as evaluating their computational performance. The results of the experiments demonstrate the advantages of Scrypt in terms of balancing execution time and security. Recommendations for selecting algorithms based on security and performance requirements are also provided.
Keywords: hashing algorithm, user registration interface, user authentication interface, privacy protection
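Of the three algorithms, scrypt is available directly in Python's standard library. A hedged sketch of hashing and verification for a registration/authentication interface; the cost parameters follow common interactive-login guidance and are assumptions, not the paper's experimental settings:

```python
import hashlib
import hmac
import os

SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, dklen=32)   # assumed interactive-login settings

def hash_password(password, salt=None):
    """Derive a scrypt digest; returns (salt, digest) for storage."""
    salt = salt if salt is not None else os.urandom(16)
    return salt, hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)

def verify_password(password, salt, expected):
    """Constant-time comparison of a fresh derivation against the stored digest."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, **SCRYPT_PARAMS)
    return hmac.compare_digest(candidate, expected)
```

The memory-hard parameters (n, r) are what give scrypt its resistance to GPU/ASIC attacks; raising them trades login latency for attack cost, which is the balance the study evaluates.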
The purpose of the article is to review the criteria that affect the functionality of platforms for deceiving attackers, to identify the strengths and weaknesses of the technology, and to consider current trends and areas for further research. The study method is analysis of existing articles in peer-reviewed Russian and foreign sources, aggregation of research, and formation of conclusions based on the analyzed sources. The article discusses the basic and situational metrics to consider when selecting and evaluating a trap: cost of implementation, design complexity, risk of compromise, data collected, strength of deception, available connections, false positive rate, attack attribution, attack complexity, time to compromise, diversity of interactions, early warning, effectiveness of attack repellency, impact on attacker behavior, threats detected by the trap, and resilience. The strengths and weaknesses of deception technology that deserve attention when deploying it are broken down. Deception platform development trends are reviewed, along with areas in which the platform remains under-researched.
Keywords: false target infrastructure, deception platform, honeypot, honeytoken, honeynet
The purpose of the article: to determine whether file hash analysis using artificial neural networks can detect exploits in files. Research method: exploits are sought in files by analyzing Windows registry file hashes obtained with two hashing algorithms, SHA-256 and SHA-512, using three types of artificial neural networks (feedforward, recurrent, and convolutional). The obtained result: artificial neural networks applied to file hash analysis can identify exploits or malicious records in files; the performance (accuracy) of the feedforward and recurrent architectures is comparable and far exceeds that of convolutional neural networks; and the longer the file hash, the more reliably an exploit can be identified.
Keywords: malware, exploit, neural networks, hashing, modeling
This study addresses the challenges of evaluating feature space dimensionality in the context of multi-label classification of cyber attacks. The research focuses on tabular data representations collected through a hardware-software simulation platform designed to emulate multi-label cyber attack scenarios. We investigate how multi-label dependencies — manifested through concurrent execution of multiple attack types on computer networks — influence both the informativeness of feature space assessments and classification accuracy. The Random Forest algorithm is employed as a representative model to quantify these effects. The practical relevance of this work lies in enhancing cyber attack detection and classification accuracy by explicitly accounting for multi-valued attribute dependencies. Experimental results demonstrate that incorporating such dependencies improves model performance, suggesting methodological refinements for security-focused machine learning pipelines.
Keywords: multivalued classification, attribute space, computer attacks, information security, classification of network traffic, attack detection, informative attributes, entropy
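Attribute informativeness can be quantified with entropy-based information gain, the quantity named in the keywords. A minimal sketch for a discrete feature (an illustration, not the paper's assessment procedure):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """H(Y) - H(Y | X): how much a discrete feature reduces label uncertainty."""
    n = len(labels)
    by_value = {}
    for x, y in zip(feature_values, labels):
        by_value.setdefault(x, []).append(y)
    conditional = sum(len(ys) / n * entropy(ys) for ys in by_value.values())
    return entropy(labels) - conditional
```

In the multi-label setting studied here, the labels Y are vectors of co-occurring attack types, so the informativeness of a feature depends on the label dependencies, which is the effect the experiments quantify.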
Malicious actors often exploit undetected vulnerabilities in systems to carry out zero-day attacks. Existing traditional detection systems, based on deep learning and machine learning methods, are not effective at handling new zero-day attacks. These attacks often remain incorrectly classified, as they represent new and previously unknown threats. The expansion of the Internet of Things (IoT) networks only contributes to the increase in such attacks. This work analyzes approaches capable of detecting zero-day attacks in IoT networks, based on an unsupervised approach that does not require prior knowledge of the attacks or the need to train intrusion detection systems (IDS) on pre-labeled data.
Keywords: Internet of Things, zero-day attack, autoencoder, machine learning, neural network, network traffic
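An unsupervised reconstruction-error detector of the kind surveyed can be sketched with PCA as a linear stand-in for an autoencoder; the data are synthetic and the surveyed approaches use autoencoders proper, but the principle is the same: train on benign traffic only and flag samples the model cannot reconstruct:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
# Benign IoT traffic is assumed to lie near a low-dimensional feature subspace
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 8))
benign = latent @ mixing + 0.05 * rng.normal(size=(500, 8))

pca = PCA(n_components=2).fit(benign)   # trained on benign traffic only

def reconstruction_error(model, X):
    """Anomaly score: distance between a sample and its low-rank reconstruction."""
    return np.linalg.norm(X - model.inverse_transform(model.transform(X)), axis=1)

threshold = np.percentile(reconstruction_error(pca, benign), 99)
attack = 3.0 * rng.normal(size=(20, 8))   # zero-day-like, off-subspace traffic
flags = reconstruction_error(pca, attack) > threshold
```

Because no attack labels are needed at training time, the detector can flag traffic patterns it has never seen, which is exactly the zero-day property the surveyed IDS approaches target.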