This paper is devoted to a theoretical analysis and comparison of methods and algorithms for automatic identity verification based on the dynamic characteristics of a handwritten signature. The processes of collecting and preprocessing dynamic characteristics are considered. Classical methods, including hidden Markov models and support vector machines, are analyzed alongside modern neural network architectures, including recurrent, convolutional, and Siamese neural networks. The advantages of Siamese neural networks in verification tasks with small volumes of training data are highlighted. Key metrics for assessing the quality of biometric systems are defined. The advantages and disadvantages of the considered methods are summarized, and promising directions for research are outlined.
Keywords: verification, signature, machine learning, dynamic characteristic, hidden Markov models, support vector machine, neural network approach, recurrent neural networks, convolutional neural networks, Siamese neural networks, type I error
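As an illustration of the Siamese approach highlighted above, the following minimal PyTorch sketch assumes each signature is resampled to a fixed length of 256 points with three dynamic channels (x, y, pen pressure); the encoder layout, sizes, and training data are illustrative stand-ins, not the paper's architecture.

```python
# A minimal Siamese verifier sketch for signature dynamics; all names,
# sizes, and data here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps one signature (batch, 3, 256) to an embedding vector."""
    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )
    def forward(self, x):
        return self.net(x)

def contrastive_loss(e1, e2, same, margin: float = 1.0):
    """Pull genuine pairs together, push forgeries beyond the margin."""
    d = torch.norm(e1 - e2, dim=1)
    return (same * d.pow(2)
            + (1 - same) * torch.clamp(margin - d, min=0).pow(2)).mean()

# One illustrative training step on random stand-in data.
enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
a, b = torch.randn(8, 3, 256), torch.randn(8, 3, 256)
same = torch.randint(0, 2, (8,)).float()  # 1 = same signer, 0 = forgery
loss = contrastive_loss(enc(a), enc(b), same)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the two branches share one encoder, the network can be trained on pairs rather than on many examples per signer, which is why this architecture suits small training sets.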
The article provides an overview of the current state of password authentication and highlights its main problems. Various options for passwordless authentication are considered as replacements, each analyzed in terms of its disadvantages and its potential to replace passwords.
The analysis revealed that some alternatives, such as OTP and push notifications, can act only as an additional factor in multi-factor authentication. Others, such as QR codes, should not be used as an authentication method at all.
Two directions of passwordless authentication emerged as clear favorites: biometrics and passkeys. In comparing the finalists, the choice fell on passkeys, since they lack the main and critical drawback of biometric authentication: dependence on keeping the biometric originals secret. If biometric data are compromised, the owner faces serious problems, since the data cannot be changed without surgical intervention.
Passkeys, by contrast, demonstrate a level of protection comparable to biometrics while being free of this drawback. At the same time, passkeys, or more precisely the current FIDO2 standard, have a few shortcomings that hinder adoption. These include the potential for malware to act as the client. Another, no less important, problem is unlinking an old key and linking a new one if the original is lost or fails.
Solving these problems requires developing a secure authentication protocol based on passkey technology.
Keywords: password authentication, passwordless authentication, push notification, QR-code, biometric authentication, passkey, FIDO2, WebAuthn, CTAP2.1
The paper proposes a method to counteract unauthorized privilege escalation in the Android operating system. The proposed method uses the ARM architecture's hardware virtualization technology to control access to the operating system's kernel data structures that store task identification information.
Keywords: information security, privilege escalation, Android, hypervisor
The paper deals with the problem of assessing the security level of critical information infrastructure objects in the financial sector based on organizational structure and management factors in the context of internal audit. Existing standards do not allow flexible assessment of the indicators characterizing information security requirements; instead, they propose expert assessments based on subjectively selected elements (documents, facts) related to particular requirements. The article considers a Bayesian approach to estimating the values of private indicators from all available characteristics of information security requirements, which yields values on a continuous scale. A corresponding model is presented that includes the calculation of private and generalized indicator values. It improves the standards-defined approach to assessing the security level of critical information infrastructure objects during internal audit by estimating private indicator values on a continuous scale and by accounting for the history of changes in the characteristics of information security requirements.
Keywords: information security, Bayesian approach, critical information infrastructure objects, indicators of compliance with information security requirements, level of protection of objects, model with probabilistic components
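The paper's exact model is not reproduced here, but a generic Beta-Bernoulli update illustrates how a private indicator can take values on a continuous scale while weighting the history of observations; the prior and the check history below are assumptions for the sketch.

```python
# A generic Beta-Bernoulli illustration (not the article's exact model):
# each audited element either satisfies (1) or violates (0) a requirement,
# and the posterior mean of the Beta distribution serves as the private
# indicator value on a continuous 0..1 scale.
def private_indicator(history, alpha=1.0, beta=1.0):
    """Posterior mean P(requirement satisfied) under a Beta(alpha, beta) prior."""
    satisfied = sum(history)
    return (alpha + satisfied) / (alpha + beta + len(history))

# Ten checks of one requirement across past audits: 8 passes, 2 failures.
print(private_indicator([1, 1, 1, 0, 1, 1, 1, 0, 1, 1]))  # 0.75
```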
The paper presents a methodology that includes stages of task performance control; data collection and analysis; determination of reliability and efficiency criteria; and justified selection, communication, implementation, and control of the results of management decisions. A cyclic algorithm for comprehensive verification of compliance with the system's reliability and efficiency criteria has been developed, allowing prompt response to changes, increased system stability, and adaptation to adverse environmental impacts. Improved mathematical formulas for assessing the state of organizational systems are proposed, including calculation of the readiness factor, the level of planned task performance, and compliance with established requirements. The methodology aims to increase the validity of decisions while reducing decision-making time, and to ensure the relevance, completeness, and reliability of information in information resources in the interests of the sustainable development of organizational systems.
Keywords: algorithms, time, control, reliability and efficiency criteria, indicators, resources, management decisions, cyclicity
This article examines the growing threat of web scraping (parsing) as a form of automated cyberattack aimed at unauthorized data collection. Although scraping publicly available data is often legal, its misuse can lead to serious consequences, including server overload, data breaches, and intellectual property infringement. Recent court cases against OpenAI and ChatGPT highlight the legal uncertainty surrounding unauthorized data collection.
The study presents a dual approach to combating malicious scraping. The first component, a traffic classification model, is a machine learning solution based on the Random Forest algorithm that achieves 89% accuracy in distinguishing legitimate from malicious bot traffic, enabling early detection of scraping attempts. The second, a data deception technique, dynamically modifies HTML content to feed false information to scrapers while maintaining the original appearance of the page. This technique defeats data collection without affecting the user experience.
The implemented capabilities include real-time traffic monitoring, dynamic page obfuscation, and automated response.
The proposed system demonstrates effectiveness in mitigating the risks associated with scraping and emphasizes the need for adaptive cybersecurity measures in evolving digital technologies.
Keywords: parsing, automated attacks, data protection, bot detection, traffic classification, machine learning, attack analysis, data spoofing, web security
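As a hedged illustration of the traffic-classification component, the sketch below trains a Random Forest on synthetic per-session features (request rate, mean inter-request delay, share of requests with an empty Referer header); the feature set and data are stand-ins, not the authors' pipeline.

```python
# Random Forest bot-vs-human classifier on illustrative session features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Columns: requests/min, mean inter-request delay (s), empty-Referer share.
humans = rng.normal([20, 3.0, 0.1], [5, 1.0, 0.05], size=(500, 3))
bots = rng.normal([300, 0.2, 0.7], [80, 0.1, 0.15], size=(500, 3))
X = np.vstack([humans, bots])
y = np.array([0] * 500 + [1] * 500)  # 0 = legitimate, 1 = scraper

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```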
The article analyzes various approaches to the generation and detection of audio deepfakes. Particular attention is paid to the preprocessing of acoustic signals, the extraction of voice signal parameters, and data classification. The study examines three groups of classifiers: support vector machines (SVM), k-nearest neighbors (KNN), and neural networks. For each group, effective methods were identified, and the most successful approaches were determined through a comprehensive analysis. Two approaches demonstrated high accuracy and reliability: a detector based on temporal convolutional networks analyzing MFCC cepstrograms achieved an equal error rate (EER) of 0.07%, while a support vector machine with a radial basis function kernel reached an EER of 0.5%. The latter method also demonstrated the following metrics on the ASVspoof 2021 dataset: Accuracy = 99.6%, F1-score = 0.997, Precision = 0.998, and Recall = 0.994.
Keywords: audio deepfakes, preprocessing of acoustic signals, support vector machine, k-nearest neighbors, neural networks, temporal convolutional networks, deepfake detection
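A minimal sketch of the SVM branch described above, assuming MFCC features pooled over time and an RBF-kernel SVM; synthetic tones stand in for real ASVspoof audio so the snippet is self-contained.

```python
# MFCC statistics -> RBF-kernel SVM; audio here is synthetic stand-in data.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_vector(y, sr=16000, n_mfcc=20):
    """Mean and std of each MFCC coefficient over time -> fixed-length vector."""
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

rng = np.random.default_rng(0)
sr, t = 16000, np.linspace(0, 1, 16000)
genuine = [np.sin(2 * np.pi * rng.uniform(100, 200) * t)
           + 0.05 * rng.normal(size=t.size) for _ in range(20)]
spoofed = [np.sign(np.sin(2 * np.pi * rng.uniform(100, 200) * t))
           + 0.05 * rng.normal(size=t.size) for _ in range(20)]
X = np.array([mfcc_vector(y, sr) for y in genuine + spoofed])
labels = np.array([0] * 20 + [1] * 20)  # 0 = genuine, 1 = spoofed

clf = SVC(kernel="rbf", gamma="scale").fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```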
This paper is devoted to a theoretical analysis of methods for verifying signature dynamics captured with a graphics tablet. Three fundamental approaches to the problem are classified: template matching, stochastic modeling, and discriminative classification. Each approach is examined through a representative method: dynamic time warping, hidden Markov models, and support vector machines, respectively. For each method, the theoretical foundations are set out, the mathematical apparatus is presented, and the main advantages and disadvantages are identified. The results of the comparative analysis can serve as the theoretical basis for developing modern signature dynamics verification systems.
Keywords: verification, biometric authentication, signature dynamics, graphics tablet, classification of methods, template matching, stochastic modeling, discriminative classification, hidden Markov models, dynamic time warping
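Of the three methods named above, dynamic time warping is the most compact to illustrate. The pure-Python sketch below computes the DTW alignment cost between two pen trajectories of different lengths; it favors clarity over speed, and the trajectories are invented.

```python
# Classic dynamic programming DTW over sequences of feature vectors.
import numpy as np

def dtw_distance(s, t):
    """DTW cost between sequences s (n, d) and t (m, d)."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(s[i - 1] - t[j - 1])  # local distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Two (x, y, pressure) trajectories of different lengths.
ref = np.array([[0, 0, 0.5], [1, 1, 0.6], [2, 1, 0.7], [3, 0, 0.4]])
probe = np.array([[0, 0, 0.5], [1, 1, 0.6], [3, 0, 0.4]])
print(dtw_distance(ref, probe))  # small cost -> likely the same signer
```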
The article presents the results of a study on the effectiveness of the hashing algorithms Argon2, Scrypt, and Bcrypt in the context of developing web applications with user registration and authentication features. The main focus of this research is on analyzing the algorithms' resilience to brute-force attacks, hardware attacks (GPU/ASIC), as well as evaluating their computational performance. The results of the experiments demonstrate the advantages of Scrypt in terms of balancing execution time and security. Recommendations for selecting algorithms based on security and performance requirements are also provided.
Keywords: hashing algorithm, user registration interface, user authentication interface, privacy protection
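A rough timing harness in the spirit of the experiments, assuming the argon2-cffi and bcrypt packages are available (scrypt ships in the Python standard library); the cost parameters shown are common defaults, not necessarily the study's configuration.

```python
# Wall-clock comparison of bcrypt, scrypt, and Argon2id at typical settings.
import hashlib
import os
import time
import bcrypt
from argon2 import PasswordHasher

password = b"correct horse battery staple"
salt = os.urandom(16)

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.3f} s")

timed("bcrypt (cost=12)",
      lambda: bcrypt.hashpw(password, bcrypt.gensalt(rounds=12)))
timed("scrypt (n=2**14, r=8, p=1)",
      lambda: hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1))
timed("argon2id (t=3, m=64 MiB)",
      lambda: PasswordHasher(time_cost=3, memory_cost=65536, parallelism=4)
              .hash(password.decode()))
```

Memory-hard parameters (scrypt's n and r, Argon2's memory_cost) are what drive GPU/ASIC resistance, so any comparison should report them alongside execution time.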
The purpose of the article is to review the criteria that affect the functionality of platforms for deceiving attackers, to identify the strengths and weaknesses of the technology, and to consider current trends and areas for further research. The study method is an analysis of existing articles in peer-reviewed Russian and foreign sources, aggregation of research, and formation of conclusions based on the analyzed sources. The article discusses basic and situational metrics to consider when selecting and evaluating a trap: cost of implementation, design complexity, risk of compromise, data collected, strength of deception, available connections, false positive rate, attack attribution, attack complexity, time to compromise, diversity of interactions, early warning, effectiveness of attack repellency, impact on attacker behavior, threats detected by the trap, and resilience. The strengths and weaknesses of deception technology that deserve attention when deploying it are analyzed. Deception platform development trends are reviewed, along with areas in which the platform remains under-researched.
Keywords: false target infrastructure, deception platform, honeypot, honeytoken, honeynet
The purpose of the article is to determine whether file hash analysis with artificial neural networks can be used to detect exploits in files. Research method: the search for exploits is based on the analysis of Windows registry file hashes obtained with two hashing algorithms, SHA-256 and SHA-512, using three types of artificial neural networks (feedforward, recurrent, and convolutional). Result: file hash analysis with artificial neural networks makes it possible to identify exploits or malicious records in files; the accuracy of feedforward and recurrent neural networks is comparable, and both substantially outperform convolutional neural networks; the longer the file hash, the more reliably an exploit in a file can be identified.
Keywords: malware, exploit, neural networks, hashing, modeling
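An illustrative sketch of the implied feature pipeline: a digest is unpacked into a normalized byte vector and classified by a small feedforward network; the data, labels, and layer sizes are stand-ins, not the article's models.

```python
# SHA-256 digest -> 32-float vector -> small feedforward classifier.
import hashlib
import torch
import torch.nn as nn

def hash_features(data: bytes, algo: str = "sha256") -> torch.Tensor:
    """Digest -> normalized byte vector (32 floats for SHA-256, 64 for SHA-512)."""
    digest = hashlib.new(algo, data).digest()
    return torch.tensor(list(digest), dtype=torch.float32) / 255.0

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: fake "registry records" with random clean/exploit labels.
X = torch.stack([hash_features(bytes([i]) * 100) for i in range(64)])
y = torch.randint(0, 2, (64,))
for _ in range(10):  # a few steps just to show the training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```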
This study addresses the challenges of evaluating feature space dimensionality in the context of multi-label classification of cyber attacks. The research focuses on tabular data representations collected through a hardware-software simulation platform designed to emulate multi-label cyber attack scenarios. We investigate how multi-label dependencies — manifested through concurrent execution of multiple attack types on computer networks — influence both the informativeness of feature space assessments and classification accuracy. The Random Forest algorithm is employed as a representative model to quantify these effects. The practical relevance of this work lies in enhancing cyber attack detection and classification accuracy by explicitly accounting for multi-valued attribute dependencies. Experimental results demonstrate that incorporating such dependencies improves model performance, suggesting methodological refinements for security-focused machine learning pipelines.
Keywords: multi-label classification, feature space, computer attacks, information security, network traffic classification, attack detection, informative features, entropy
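A small multi-label illustration: scikit-learn's Random Forest accepts a 2D label matrix natively, with each column marking one concurrently executed attack type; the synthetic labels below are deliberately correlated to mimic joint execution, and everything here is a stand-in for the simulation platform's data.

```python
# Multi-label Random Forest on synthetic tabular traffic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                 # tabular traffic features
# Three attack labels; make two of them correlated to mimic joint execution.
y = np.zeros((1000, 3), dtype=int)
y[:, 0] = (X[:, 0] > 0.5).astype(int)           # e.g. scanning
y[:, 1] = ((X[:, 1] > 0.3) | (y[:, 0] == 1)).astype(int)  # often co-occurs
y[:, 2] = (X[:, 2] > 1.0).astype(int)           # independent attack type

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("micro-F1:", f1_score(y_te, clf.predict(X_te), average="micro"))
```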
Malicious actors often exploit undetected vulnerabilities in systems to carry out zero-day attacks. Existing traditional detection systems based on deep learning and machine learning methods are not effective against new zero-day attacks, which often go misclassified because they represent previously unknown threats. The expansion of Internet of Things (IoT) networks only increases the number of such attacks. This work analyzes approaches capable of detecting zero-day attacks in IoT networks using an unsupervised approach that requires neither prior knowledge of the attacks nor training intrusion detection systems (IDS) on pre-labeled data.
Keywords: Internet of Things, zero-day attack, autoencoder, machine learning, neural network, network traffic
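A minimal sketch of the unsupervised idea analyzed here: train an autoencoder on benign traffic only, then flag flows whose reconstruction error exceeds a threshold; the sizes, data, and the 99th-percentile threshold are illustrative assumptions.

```python
# Autoencoder anomaly detector: high reconstruction error -> possible zero-day.
import torch
import torch.nn as nn

ae = nn.Sequential(                 # 20 traffic features -> 4 -> 20
    nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 4), nn.ReLU(),
    nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 20),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
benign = torch.randn(512, 20)       # stand-in for normal IoT flow features
for _ in range(200):
    opt.zero_grad()
    loss = ((ae(benign) - benign) ** 2).mean()
    loss.backward()
    opt.step()

# Threshold: e.g. the 99th percentile of training reconstruction error.
with torch.no_grad():
    err = ((ae(benign) - benign) ** 2).mean(dim=1)
    threshold = err.quantile(0.99)
    attack = torch.randn(16, 20) * 3    # unseen, differently distributed flows
    flags = ((ae(attack) - attack) ** 2).mean(dim=1) > threshold
print("flagged as anomalous:", int(flags.sum()), "of 16")
```

Because training uses only benign data, no attack labels are needed, which is exactly what makes the approach applicable to previously unknown threats.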
The electronic document management system is an element of an organization's information infrastructure aimed at building a comprehensive system for protecting confidential information. The market for electronic document management systems shows continuous growth owing to their advantages, which underlines the relevance of ensuring information security in such systems. The article analyzes the current types and channels of information leakage in electronic document management systems.
Keywords: document management, confidentiality of information, electronic document management, information leaks, information security, information security issues, information security system
The paper describes the main features of traffic processing and filtering in the NGFW solution xFirewall from Infotecs. The device's main task is to filter traffic at OSI model layers from the network layer to the application layer; in particular, it can filter specific applications and application protocols. The essence of a stateful session firewall is explained, and a way to verify the correctness of traffic processing by the firewall is described. In conclusion, the xFirewall solution is shown to comply with the NGFW product class.
Keywords: firewall, NGFW, xFirewall, SPI, import substitution, critical information infrastructure, information protection, DPI, traffic filtering, stateful packet inspection, ViPNet
The objective of the study is to analyze methods of describing computer incidents in the field of information security when identifying illegal events and testing cyber-physical systems, in order to improve the quality of documentation in protecting cyber-physical systems. To achieve this goal, a format for describing incidents must be developed. For this purpose, regulatory documents were analyzed; types of computer incidents and their classification were identified; incident criteria were defined; and the degrees of criticality of the consequences of their occurrence were established. A document for describing an incident was developed. These studies are carried out in conjunction with work on methods for monitoring and testing the security of cyber-physical systems for automatic detection of illegal and/or abnormal operation. Based on the results, an algorithm of actions and methods for identifying and preventing the consequences of computer incidents will be formed, making it possible to increase the security of cyber-physical systems.
Keywords: information security event, computer incident, information system, incident description, documentation generation, incident card, cybersecurity, cyber-physical system
This paper describes the concatenation of neural network architectures for face image and voice recognition during training. Extracted features of the face image and the acoustic signal serve as input vectors for training the neural networks. The results obtained and a performance comparison of different neural network methods for recognizing users of a computer information system are presented.
Keywords: biometric authentication, voice, dataset, face image, computer information system, concatenation, neural network
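A minimal sketch of feature-level concatenation, assuming a 128-dimensional face embedding and a 40-dimensional voice feature vector per sample; the branch and head sizes are illustrative, not the paper's architecture.

```python
# Two-branch network fused by concatenating per-modality embeddings.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_users: int = 10):
        super().__init__()
        self.face = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.voice = nn.Sequential(nn.Linear(40, 64), nn.ReLU())
        self.head = nn.Linear(128, n_users)  # acts on the concatenated vector
    def forward(self, face_x, voice_x):
        fused = torch.cat([self.face(face_x), self.voice(voice_x)], dim=1)
        return self.head(fused)

net = FusionNet()
logits = net(torch.randn(4, 128), torch.randn(4, 40))
print(logits.shape)  # torch.Size([4, 10]) -> one score per enrolled user
```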
This paper describes a virtualized environment designed for comprehensive experiments involving peer-to-peer networks and information security algorithms. The architecture integrates the VMware hypervisor with the EVE-NG network device emulation platform, providing flexible resource allocation and realistic topology simulation. A MikroTik router serves as the central node, enabling a “star-shaped” scheme of interaction among virtual machines running various operating systems (Windows 7, 10, 11, Linux Debian). The chosen configuration simplifies the testing of multiple initial connections and multi-level cryptography algorithms, ensures stable routing, and supports further automation of software installation using Bash or PowerShell scripts.
Keywords: information security, virtualized environment, multiple initial connections, peer-to-peer network, virtual private network
The article discusses modern machine learning algorithms for detecting and preventing denial-of-service (DoS) attacks. The work analyzes approaches such as traffic classification, anomaly detection in system behavior, and network packet analysis, which allowed the authors to develop an early warning system for possible attacks. Based on experimental results, the prospects of machine learning for more reliable protection of network infrastructure are also discussed. The results are of great scientific and practical value for specialists in cybersecurity and in modeling defense systems.
Keywords: information security, denial of service, machine learning, network traffic
The process of ensuring information security is inextricably linked with assessing compliance with requirements. In the field of information protection, this process is called an information security audit. There are many international and domestic audit standards describing various processes and methods for assessing compliance. A key drawback of these standards is their exclusively qualitative assessment, without numerical calculations, which prevents the procedure from being fully objective. Fuzzy logic makes it possible to give the audit process a proper quantitative assessment while operating with understandable linguistic variables. The article analyzes existing standards and presents a conceptual model for applying the fuzzy set method in information security audits.
Keywords: information security, information infrastructure, security audit, risk analysis, fuzzy sets, fuzzy logic
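A toy sketch of the fuzzy-set idea, assuming triangular membership functions for the linguistic variable "degree of compliance" and a simple membership-weighted defuzzification; the terms and weights are illustrative, not the article's model.

```python
# Linguistic terms -> triangular memberships -> crisp compliance score.
def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with peak b on support [a, c]."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Terms for "degree of compliance" on a 0..1 universe.
terms = {"low": (0.0, 0.0, 0.5), "medium": (0.2, 0.5, 0.8), "high": (0.5, 1.0, 1.0)}
peaks = {"low": 0.0, "medium": 0.5, "high": 1.0}

def crisp_score(x):
    """Defuzzify: membership-weighted average of the term peaks."""
    mu = {t: triangular(x, *abc) for t, abc in terms.items()}
    total = sum(mu.values())
    return sum(mu[t] * peaks[t] for t in mu) / total if total else x

print(crisp_score(0.65))  # a requirement judged between "medium" and "high"
```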
The purpose of the study is to develop a platform that allows for various types of checks to identify weaknesses in the subsystems of unmanned automated systems.
Research methods: when developing the platform, a methodology based on the construction of ontological models was used, which made it possible to link the structural and functional characteristics of unmanned automated systems with threats and vulnerabilities, as well as with attacks on such systems. The process parallelization method was used to scan radio frequency ranges. The decision-making system is based on risk assessment methods.
Research results: the platform optimizes the security testing process for unmanned automated systems. Automated testing relies on a database that includes a catalog of structural and functional characteristics, threats, vulnerabilities, and attacks. The platform can determine which types of structural and functional characteristics correspond to the vulnerabilities of unmanned automated systems. A system consisting of individual components has been developed: a sensor for scanning unmanned automated systems and an intelligent system for their active analysis. The scanning sensor is implemented as a small-sized device; the intelligent active analysis system is implemented as software.
The scientific novelty lies in the development of a concept for a system that analyzes the security of unmanned automated systems based on ontological models and radio frequency analysis to identify system vulnerabilities during pre-operational checks.
Keywords: data analysis, statistics, attacks, risks, unmanned automated systems
This article presents an analysis of corporate network traffic over the SMTP protocol to identify malicious traffic. The relevance of the study is driven by the increasing number of email-based attacks, such as the distribution of viruses, spam, and phishing messages. The objective of the work is to develop an algorithm for detecting malicious traffic that combines traditional analysis methods with modern machine learning approaches. The article describes the research stages: data collection, preprocessing, model training, algorithm testing, and effectiveness analysis. The data used were collected with the Wireshark tool and include SMTP logs, message headers, and attachments. The experimental results demonstrated high accuracy in detecting malicious traffic, confirming the potential of the proposed approach.
Keywords: SMTP, malicious traffic, network traffic analysis, email, machine learning, Wireshark, spam, phishing, classification algorithms
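An illustrative tail of the pipeline described above: hand-crafted features from message headers and bodies feed a Random Forest; the features shown (subject length, digit share, link count, multipart flag) are assumptions for the sketch, not the authors' feature set.

```python
# Toy SMTP-message featurizer and classifier on stand-in data.
import numpy as np
from email import message_from_string
from sklearn.ensemble import RandomForestClassifier

def message_features(raw: str) -> list[float]:
    msg = message_from_string(raw)
    subject = msg.get("Subject", "")
    payload = msg.get_payload()
    body = payload if isinstance(payload, str) else ""
    return [
        len(subject),
        sum(c.isdigit() for c in subject) / max(len(subject), 1),
        body.lower().count("http"),                        # embedded links
        float(msg.get_content_maintype() == "multipart"),  # attachment hint
    ]

ham = "Subject: Meeting notes\n\nSee you at 10am."
spam = ("Subject: WIN 1000000 NOW 2024\n\n"
        "Click http://example.test http://example.test")
X = np.array([message_features(ham), message_features(spam)] * 50)
y = np.array([0, 1] * 50)  # 0 = legitimate, 1 = malicious
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([message_features(spam)]))  # -> [1], flagged as malicious
```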
The widespread use of social media platforms has led to the accumulation of vast amounts of stored data, enabling the prediction of rare events based on user interaction analysis. This study presents a method for predicting rare events using graph theory, particularly graphlets. The social network VKontakte, with over 90 million users, serves as the data source. The ORCA algorithm is utilized to identify characteristic graph structures within the data. Throughout the study, user interactions were analyzed to identify precursors of rare events and assess prediction accuracy. The results demonstrate the effectiveness of the proposed method, its potential for threat monitoring, and the possibilities for further refinement of graphlet-based prediction models.
Keywords: social media, security event, event prediction, graph theory, graphlet, interaction analysis, time series analysis, correlation analysis, data processing, anomalous activity
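ORCA itself counts orbits of graphlets with up to five nodes; as a hedged stand-in, the networkx snippet below counts the two simplest graphlets (triangles and open 2-paths) in a synthetic interaction graph to show how graphlet profiles become features.

```python
# Minimal graphlet counts on a stand-in interaction graph.
import networkx as nx

G = nx.gnm_random_graph(200, 600, seed=0)  # synthetic interaction graph

triangles = sum(nx.triangles(G).values()) // 3  # each triangle counted at 3 nodes
wedges = sum(d * (d - 1) // 2 for _, d in G.degree())  # all 2-paths
open_two_paths = wedges - 3 * triangles                # exclude closed ones
print("triangles:", triangles, "open 2-paths:", open_two_paths)

# Per-node graphlet profiles like these become feature vectors whose shifts
# over time can serve as precursors of rare events.
```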
Amid digital transformation, companies are actively implementing Customer Relationship Management (CRM) systems to manage customer relationships. However, data protection, confidentiality, and transparency of interaction remain critically important. This article explores the use of blockchain technology to enhance the security of CRM systems and strengthen trust between businesses and customers. The purpose of the work is to analyze the potential of blockchain for data protection in CRM systems and to assess its impact on the transparency of customer transactions. The paper examines the main threats to data security in CRM, the principles of blockchain technology, and its key advantages in this context, including decentralization, immutability of records, and protection from unauthorized access. Based on the analysis, promising areas of blockchain integration into CRM systems are identified, practical recommendations for its application are proposed, and the potential effectiveness of the technology is assessed. The results of the study may be useful to companies interested in strengthening customer data protection and increasing the transparency of user interaction processes.
Keywords: blockchain, CRM-system, security, data protection, transparency, customer interaction
Zero-day attacks are one of the most dangerous threats to the security of modern systems, applications and infrastructure because they are unpredictable. Due to the unknown signatures of zero-day attacks, traditional signature-based defences are unable to detect them. Countering such attacks in IoT networks requires both in-depth research and the implementation of practical measures. The present review of state-of-the-art zero-day attack detection research has shown that deep learning approaches are best at detecting zero-day attacks and botnets in IoT networks. These approaches can analyse anomalies in network traffic and identify new threats and zero-day attacks while minimising the number of false positives.
Keywords: Zero-Day Attack, vulnerability, Internet of Things, machine learning, anomaly, signature-based defence method, autoencoder, network traffic