Modern intelligent control systems (ICS) are complex software and hardware systems that use artificial intelligence, machine learning, and big data processing to automate decision-making processes. The article discusses the main tools and technologies used in the development of ICS, such as neural networks, deep learning algorithms, expert systems and decision support systems. Special attention is paid to the role of cloud computing, the Internet of Things and cyber-physical systems in improving the efficiency of intelligent control systems. The prospects for the development of this field are analyzed, as well as challenges related to data security and interpretability of models. Examples of the successful implementation of ICS in industry, medicine and urban management are given.
Keywords: intelligent control systems, artificial intelligence, machine learning, neural networks, big data, Internet of things, cyber-physical systems, deep learning, expert systems, automation
In this work, we present the development and analysis of a feature model for dynamic handwritten signature recognition aimed at improving its effectiveness. The feature model is based on the extraction of both global features (signature length, average angle between signature vectors, range of dynamic characteristics, proportionality coefficient, average input speed) and local features (pen coordinates, pressure, azimuth, and tilt angle). We used the method of potentials to generate a signature template that accounts for variations in writing style. Experimental evaluation was conducted on the MCYT_Signature_100 signature database, which contains 2500 genuine and 2500 forged samples. We determined optimal compactness values for each feature, which makes it possible to accommodate signature writing variability and enhance recognition accuracy. The obtained results confirm the effectiveness of the proposed feature model and its potential for biometric authentication systems, making it of practical interest to information security specialists.
Keywords: dynamic handwritten signature, signature recognition, biometric authentication, feature model, potential method, MCYT_Signature_100, FRR, FAR
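To illustrate the kind of global features listed in the abstract, the following Python sketch computes signature length, average angle between successive signature vectors, speed range, and average input speed from a sequence of pen samples. The array layout and the exact feature definitions are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def global_features(t, x, y):
    """Toy global features of a dynamic signature from timestamps and pen coordinates.

    t, x, y are 1-D arrays of equal length (one row per pen sample).
    The feature definitions are illustrative assumptions.
    """
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    seg_len = np.hypot(dx, dy)                  # length of each stroke segment
    angles = np.arctan2(dy, dx)                 # direction of each segment
    speed = seg_len / np.maximum(dt, 1e-9)      # instantaneous input speed
    return {
        "length": seg_len.sum(),                            # total signature length
        "mean_inter_vector_angle": np.mean(np.diff(angles)),
        "mean_speed": speed.mean(),
        "speed_range": speed.max() - speed.min(),           # range of a dynamic characteristic
    }
```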
The article examines how replacing the original data with transformed data affects the quality of training of deep neural network models. The author conducts four experiments to assess the impact of data substitution in tasks with small datasets. In the first experiment the model is trained on the original data set without changes; in the second, all images in the original set are replaced with transformed ones; in the third, the number of original images is reduced and the data set is expanded using transformations applied to the images; and in the fourth, the data set is expanded so that the number of images in each class is balanced.
Keywords: dataset, extension, neural network models, classification, image transformation, data replacement
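The four experimental configurations can be sketched with simple NumPy image transformations. The specific transforms (flips plus mild noise), the assumption of float images in [0, 1] with integer class labels, and the dataset shapes are placeholders, since the abstract does not name the exact operations used.

```python
import numpy as np

rng = np.random.default_rng(0)

def transform(img):
    """One illustrative transformation: random flips plus mild Gaussian noise."""
    if rng.random() < 0.5:
        img = np.fliplr(img)
    if rng.random() < 0.5:
        img = np.flipud(img)
    return np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 1.0)

def build_variants(images, labels):
    """Return the four dataset variants described in the abstract (illustrative sketch).

    images: array of float images in [0, 1]; labels: integer class labels.
    """
    original = (images, labels)                                    # 1) unchanged data
    replaced = (np.stack([transform(i) for i in images]), labels)  # 2) all images replaced
    half = len(images) // 2                                        # 3) fewer originals + transforms
    reduced_plus_aug = (
        np.concatenate([images[:half],
                        np.stack([transform(i) for i in images[:half]])]),
        np.concatenate([labels[:half], labels[:half]]),
    )
    # 4) expand under-represented classes until all class counts match the largest one
    counts = np.bincount(labels)
    extra_imgs, extra_lbls = [], []
    for cls, cnt in enumerate(counts):
        idx = np.flatnonzero(labels == cls)
        need = counts.max() - cnt
        if need == 0 or len(idx) == 0:
            continue
        for i in rng.choice(idx, need, replace=True):
            extra_imgs.append(transform(images[i]))
            extra_lbls.append(cls)
    balanced = (
        np.concatenate([images, np.stack(extra_imgs)]) if extra_imgs else images,
        np.concatenate([labels, np.array(extra_lbls)]) if extra_lbls else labels,
    )
    return original, replaced, reduced_plus_aug, balanced
```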
The article describes a methodology for constructing a regression model of the occupancy of paid parking zones that takes into account the uneven distribution of sessions during the day and the behavioral characteristics of two groups of clients; the regression model consists of two equations, each reflecting the characteristics of one group. In addition, the process of creating a data model and of collecting, processing, and analyzing the data is described, along with the distribution of occupancy during the day. A methodology is also given for modeling a phenomenon whose distribution is bell-shaped and depends on the time of day. The results can be used by commercial enterprises managing parking lots, by city administrations, and by researchers modeling similar indicators that exhibit a normal distribution characteristic of many natural processes (customer flow in bank branches, replenishment and/or withdrawal of funds during the life of replenishable deposits, etc.).
Keywords: paid parking, occupancy, regression model, customer behavior, behavioral segmentation, model robustness, model, forecast, parking management, distribution
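A minimal sketch of fitting a bell-shaped occupancy curve over the hours of the day with SciPy. The Gaussian form, the two hypothetical client groups with morning and evening peaks, and the synthetic data are assumptions for illustration, not the authors' actual equations or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def bell(hour, amplitude, peak_hour, width):
    """Bell-shaped occupancy profile over the day (illustrative Gaussian form)."""
    return amplitude * np.exp(-((hour - peak_hour) ** 2) / (2.0 * width ** 2))

def two_group_model(hour, a1, mu1, s1, a2, mu2, s2):
    """Sum of two bell curves, one per hypothetical client group."""
    return bell(hour, a1, mu1, s1) + bell(hour, a2, mu2, s2)

# synthetic example data: observed occupancy share for each hour of the day
hours = np.arange(24)
occupancy = (two_group_model(hours, 0.5, 9, 2, 0.4, 18, 3)
             + np.random.default_rng(1).normal(0, 0.02, 24))

params, _ = curve_fit(two_group_model, hours, occupancy,
                      p0=[0.5, 9, 2, 0.4, 18, 3])   # initial guesses for the two peaks
print(dict(zip(["a1", "mu1", "s1", "a2", "mu2", "s2"], params.round(2))))
```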
Software has been developed in the Microsoft Visual Studio environment to evaluate the surface characteristics of liquids, solutions, and suspensions. The module, with a user-friendly interface, does not require special skills from the user and allows a numerical calculation of the energy characteristics of a liquid in about one second: adhesion, cohesion, wetting energy, the spreading coefficient, and the adhesion of the liquid composition to the contact surface. The calculation of the wetting of a steel surface by liquid media is demonstrated using distilled water as a test liquid and an initial liquid separation lubricant of the Penta-100 series. Optical microscopy has shown that good lubrication of the steel surface ensures the formation of a homogeneous, defect-free coating. The proposed module allows an express assessment of the compatibility of liquid formulations with the protected surface and is of interest to manufacturers of paint and varnish materials for product quality control.
Keywords: computer program, C# programming language, wetting, surface, adhesion
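The energy characteristics named in the abstract can be estimated from the liquid's surface tension and contact angle using the standard Young-Dupré relations. The sketch below uses those textbook formulas, which may differ in detail from the formulas implemented in the described C# module; the input values are illustrative only.

```python
import math

def wetting_characteristics(surface_tension, contact_angle_deg):
    """Textbook wetting-related energies (mJ/m^2 if surface tension is given in mN/m)."""
    theta = math.radians(contact_angle_deg)
    work_of_adhesion = surface_tension * (1.0 + math.cos(theta))   # Young-Dupre equation
    work_of_cohesion = 2.0 * surface_tension                       # cohesion of the liquid itself
    spreading_coefficient = work_of_adhesion - work_of_cohesion    # S = Wa - Wc
    wetting_energy = surface_tension * math.cos(theta)             # adhesion tension
    return {
        "work_of_adhesion": work_of_adhesion,
        "work_of_cohesion": work_of_cohesion,
        "spreading_coefficient": spreading_coefficient,
        "wetting_energy": wetting_energy,
    }

# distilled water on steel, illustrative values only
print(wetting_characteristics(surface_tension=72.8, contact_angle_deg=70.0))
```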
A review of various approaches used to model the contact interaction between the grinding wheel grain and the surface layer of the workpiece during grinding is presented. In addition, the influence of material properties, grinding parameters and grain morphology on the contact process is studied.
Keywords: grinding, grain, contact zone, modeling, grinding wheel, indenter, micro cutting, cutting depth
Linear feedback shift registers (LFSRs) and the maximum-length pseudo-random sequences (m-sequences) they generate are widely used in mathematical modeling, cryptography, radar, and communications. This wide use is due to their special properties, such as their correlation properties. An interesting property of these sequences, rarely discussed in the recent scientific literature, is the possibility of forming quasi-orthogonal matrices on their basis. In this paper, methods for generating quasi-orthogonal matrices from maximum-length pseudo-random sequences (m-sequences) are studied. The existing method, based on the cyclic shift of an m-sequence and the addition of a border to the resulting cyclic matrix, is analyzed. An alternative method is proposed, based on the relationship between maximum-length pseudo-random sequences and quasi-orthogonal Mersenne and Hadamard matrices, which allows generating cyclic quasi-orthogonal matrices of symmetric structure without a border. A comparative analysis of the correlation properties of the matrices obtained by both methods and of the original m-sequences is performed. It is shown that the proposed method inherits the correlation properties of m-sequences, provides more efficient storage, and is potentially better suited for privacy-related tasks.
Keywords: orthogonal matrices, quasi-orthogonal matrices, Hadamard matrices, m-sequences
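A minimal sketch of the underlying construction: an LFSR generates an m-sequence, its cyclic shifts are stacked into a circulant ±1 matrix, and near-orthogonality follows from the two-valued periodic autocorrelation of m-sequences. The specific feedback taps are an arbitrary primitive choice, and the bordering and normalisation details of the methods compared in the paper are not reproduced here.

```python
import numpy as np

def m_sequence(taps, n_bits):
    """One period of an m-sequence from an LFSR with the given 1-based feedback taps.

    The taps must correspond to a primitive feedback polynomial; (5, 2) with
    n_bits = 5 yields a maximal-length sequence of period 31.
    """
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return np.array(seq)

# map the binary m-sequence to +-1 and stack its cyclic shifts into a circulant matrix
s = 1 - 2 * m_sequence((5, 2), 5)
C = np.stack([np.roll(s, k) for k in range(len(s))])

# the off-peak autocorrelation of an m-sequence equals -1, so C @ C.T is close to a scaled identity
gram = C @ C.T
print(gram[0, 0], gram[0, 1])   # expected: 31 and -1
```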
The article examines the transition of universities from data warehouses to data lakes, revealing their potential in processing big data. The introduction highlights the main differences between warehouses and lakes, focusing on the difference in data management philosophy. Data warehouses are typically used for structured data with a relational architecture, while data lakes store data in its raw form, supporting flexibility and scalability. The section "Data Sources Used by the University" describes how universities manage data collected from various departments, including ERP systems and cloud databases. The discussion of data lakes and data warehouses highlights their key differences in data processing and management methods, as well as their advantages and disadvantages. The article examines in detail the problems and challenges of the transition to data lakes, including security, scale, and implementation costs. Architectural models of data lakes such as the "Raw Data Lake" and the "Data Lakehouse" are presented, describing various approaches to managing the data lifecycle and business goals. Big data processing methods in lakes cover the use of the Apache Hadoop platform and current storage formats. Processing technologies are described, including the use of Apache Spark and machine learning tools. Practical examples of data processing and of applying machine learning orchestrated through Spark are given. In conclusion, the relevance of the transition to data lakes for universities is emphasized, security and management challenges are highlighted, and the use of cloud technologies is recommended to reduce costs and increase productivity in data management.
Keywords: data warehouse, data lake, big data, cloud storage, unstructured data, semi-structured data
The paper presents a method for the quantitative assessment of zigzag trajectories of vehicles, which makes it possible to identify potentially dangerous driver behavior. The algorithm analyzes changes in direction between trajectory segments and includes data preprocessing steps: merging of closely spaced points and trajectory simplification using a modified Ramer-Douglas-Peucker algorithm. Experiments on a balanced data set (20 trajectories) confirmed the effectiveness of the method: accuracy 0.8, recall 1.0, F1-measure 0.833. The developed approach can be applied in traffic monitoring, accident prevention, and dangerous driving detection systems. Further research is aimed at improving the accuracy and adapting the method to real-world conditions.
Keywords: trajectory, trajectory analysis, zigzag, trajectory simplification, Ramer-Douglas-Peucker algorithm, YOLO, object detection
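A compact sketch of the described pipeline: merge near-duplicate points, simplify the trajectory with the classic Ramer-Douglas-Peucker procedure, then flag the track as zigzag if the number of sharp direction changes exceeds a threshold. The distance and angle thresholds and the exact zigzag criterion are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def merge_close_points(points, min_dist=1.0):
    """Drop points closer than min_dist to the previously kept point."""
    kept = [points[0]]
    for p in points[1:]:
        if np.hypot(*(p - kept[-1])) >= min_dist:
            kept.append(p)
    return np.array(kept)

def rdp(points, epsilon):
    """Classic Ramer-Douglas-Peucker simplification of a 2-D polyline."""
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    dx, dy = end - start
    norm = np.hypot(dx, dy)
    rel = points - start
    if norm == 0:
        dists = np.hypot(rel[:, 0], rel[:, 1])
    else:
        dists = np.abs(dx * rel[:, 1] - dy * rel[:, 0]) / norm   # distance to the chord
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return np.vstack([left[:-1], right])
    return np.vstack([start, end])

def is_zigzag(points, angle_thresh_deg=60.0, min_turns=3):
    """Flag a track as zigzag if its simplified polyline has several sharp direction changes."""
    pts = rdp(merge_close_points(np.asarray(points, dtype=float)), epsilon=2.0)
    seg = np.diff(pts, axis=0)
    headings = np.unwrap(np.arctan2(seg[:, 1], seg[:, 0]))
    turns = np.abs(np.degrees(np.diff(headings)))
    return int(np.sum(turns > angle_thresh_deg)) >= min_turns
```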
In this paper, a new model of an open multichannel queuing system with mutual assistance between channels and limited waiting time for a request in a queue is proposed. General mathematical dependencies for the probabilistic characteristics of such a system are presented.
Keywords: queuing system, queue, service device, mutual assistance between channels
Currently, key aspects of software development include the security and efficiency of the applications being created. Special attention is given to data security and operations involving databases. This article discusses methods and techniques for developing secure applications through the integration of the Rust programming language and the PostgreSQL database management system (DBMS). Rust is a general-purpose programming language that prioritizes safety as its primary objective. The article examines key concepts of Rust, such as strict typing, the RAII (Resource Acquisition Is Initialization) idiom, macro definitions, and immutability, and how these features contribute to the development of reliable and high-performance applications when interfacing with databases. The integration with PostgreSQL, which proves to be both straightforward and robust, is analyzed, highlighting its capacity for efficient data management while maintaining a high level of security and thereby mitigating common errors and vulnerabilities. Rust is currently used less widely than popular languages such as JavaScript, Python, and Java, in part because of its steep learning curve. However, major companies see its potential: Rust modules are being integrated into operating system kernels (Linux, Windows, Android), Mozilla is developing features of Firefox's Gecko engine in Rust, and Stack Overflow surveys show rising usage of Rust. A practical example involving the dispatch of information related to class schedules and video content illustrates the advantages of using Rust in conjunction with PostgreSQL to create a scheduling management system that ensures data integrity and security.
Keywords: Rust programming language, memory safety, RAII, metaprogramming, DBMS, PostgreSQL
This article provides an overview of existing structural solutions for in-line robots designed for inspection work. The main attention is paid to the analysis of various motion mechanisms and chassis types used in such robots, as well as to the identification of their advantages and disadvantages in relation to the task of scanning a longitudinal weld. Such types of robots as tracked, wheeled, helical and those that move under the influence of pressure inside the pipe are considered. Special attention is paid to the problem of ensuring stable and accurate movement of the robot along the weld, minimizing lateral displacements and choosing the optimal positioning system. Based on the analysis, recommendations are offered for choosing the most appropriate type of motion and chassis to perform the task of constructing a 3D model of a weld using a laser triangulation sensor (hereinafter referred to as LTD).
Keywords: in-line work, inspection work, 3D scanning, welds, structural solutions, types of movement, chassis, crawler robots, wheeled robots, screw robots, longitudinal welds, laser triangulation sensor
The railway transport industry demonstrates significant achievements in various fields of activity through the introduction of predictive analytics. Predictive analytics systems use data from a variety of sources, such as sensor networks, historical data, weather conditions, etc. The article discusses the key areas of application of predictive analytics in railway transport, as well as the advantages, challenges and prospects for further development of this technology in the railway infrastructure.
Keywords: predictive analytics in railway transport, passenger traffic forecasting, freight optimization, maintenance optimization, inventory and supply management, personnel management, financial planning, big data analysis
A Simulink model is considered that allows calculating the transient processes of objects described by a step response (transient function) for any type of input action. An algorithm is described for the operation of an S-function that performs the calculation using the Duhamel integral. It is shown that, owing to the features of the S-function, it can store values from the previous step of the Simulink model calculation. This allows the input signal to be decomposed into step components and the time of occurrence and value of each step to be stored. For each increment of the input signal, the S-function calculates the response by scaling the step response. Then, at each calculation step, the sum of these reactions is found. The S-function provides a procedure for freeing memory once the end point of the step response is reached for a given increment. Thus, the amount of memory required for the calculation does not grow above a certain limit and, in general, does not depend on the length of the model time. For the calculations, the S-function uses matrix operations rather than loops, so the model computes quite quickly. The article presents the results of the calculations, gives recommendations for setting the model parameters, and formulates a conclusion on the possibility of using the model for calculating dynamic modes.
Keywords: simulation modeling, Simulink, step response, step function, S-function, Duhamel integral.
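The principle behind the described S-function can be sketched outside Simulink: the input is decomposed into step increments, each increment launches a copy of the stored step response scaled by that increment, and the output at every sample is the sum of all active copies (a discrete form of the Duhamel integral). The matrix bookkeeping and memory-release logic of the actual S-function are not reproduced in this Python sketch.

```python
import numpy as np

def duhamel_superposition(step_response, u):
    """Response of an LTI system to input u via superposition of scaled step responses.

    step_response: samples h[0..K-1] of the step response on a fixed time step.
    u:             input signal sampled on the same grid.
    """
    n = len(u)
    increments = np.diff(u, prepend=0.0)      # decompose the input into step components
    y = np.zeros(n)
    for k, du in enumerate(increments):       # each increment adds a shifted, scaled copy
        if du != 0.0:
            tail = min(len(step_response), n - k)
            y[k:k + tail] += du * step_response[:tail]
    return y

# example: first-order step response h(t) = 1 - exp(-t), ramp-then-hold input
dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)
h = 1.0 - np.exp(-t)
u = np.clip(t, 0.0, 1.0)
y = duhamel_superposition(h, u)
print(round(y[-1], 3))   # approaches the steady-state value of the held input
```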
The article substantiates the hypothesis that the disruptive effect of genetic algorithm (GA) operators on the trajectory of the population in the solution space can be changed directly during the run of the evolutionary procedure for computationally intensive tasks. To do so, it is proposed to use a control superstructure based on an artificial neural network (ANN) or the random forest algorithm. This study presents the results obtained with calculations on CPU and CPU + GPGPU in a resource-intensive task of synthesizing dynamic simulation models of business processes using the mathematical apparatus of Petri net (PN) theory, together with a comparison of the GA without a control superstructure, the GA with a control superstructure based on an RNN-class ANN, and the GA with the random forest algorithm. To simulate the operation of the GA, the ANN, the random forest algorithm, and the business process models, a graph representation using various extensions of PNs is proposed, and examples of modeling the selected methods with the proposed mathematical apparatus are given. For the ANN and the random forest algorithm to recognize the state of the GA population, a number of rules are proposed that allow the solution synthesis process to be managed. Based on the computational experiments and their analysis, the strengths and weaknesses of using the proposed machine learning algorithms as a control superstructure are shown. The proposed hypothesis was confirmed by the results of the computational experiments.
Keywords: "Petri net, decision tree, random forest, machine learning, Petri net theory, bipartite directed graph, intelligent systems, evolutionary algorithms, decision support systems, mathematical modeling, graph theory, simulation modeling
The article describes the mathematical foundations of time-frequency analysis of signals using the algorithms Empirical Mode Decomposition (EMD), Intrinsic Time-Scale Decomposition (ITD) and Variational Mode Decomposition (VMD). Synthetic and real signals distorted by additive white Gaussian noise with different signal-to-noise ratio are considered. A comprehensive comparison of the EMD, ITD and VMD algorithms has been performed. The possibility of using these algorithms in the tasks of signal denoising and spectral analysis is investigated. The estimation of algorithm execution time and calculation stability is performed.
Keywords: time-frequency analysis, denoising, decomposition, mode, Hilbert-Huang transformation, Empirical Mode Decomposition, Intrinsic Time-Scale Decomposition, Variational Mode Decomposition
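A minimal denoising sketch in the spirit of the comparison: the noisy signal is decomposed into intrinsic mode functions with EMD and reconstructed without the first, most noise-dominated modes. It assumes the third-party PyEMD package (distributed on PyPI as EMD-signal); the ITD and VMD variants and the paper's mode-selection rules are not reproduced, and dropping exactly two modes is only a heuristic.

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD (EMD-signal) package is installed

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
noisy = clean + rng.normal(0, 0.3, t.size)          # additive white Gaussian noise

imfs = EMD().emd(noisy, t)                          # modes ordered from high to low frequency
denoised = imfs[2:].sum(axis=0)                     # drop the first two (noise-dominated) modes

snr = lambda ref, x: 10 * np.log10(np.sum(ref**2) / np.sum((ref - x)**2))
print(f"SNR before: {snr(clean, noisy):.1f} dB, after: {snr(clean, denoised):.1f} dB")
```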
The paper proposes an approach to improving the efficiency of machine learning models used in monitoring tasks by means of metric spaces. To solve this problem, a method is proposed for assessing the quality of monitoring systems based on interval estimates of the zones of response to a possible incident. This approach extends the classical metrics for evaluating machine learning models to take into account the specific requirements of monitoring tasks. The calculation of interval boundaries is based on probabilities derived from a classifier trained on historical data to detect dangerous states of the system. By combining the probability of an incident with the normalized distance to incidents in the training sample, it is possible to simultaneously improve all the considered quality metrics for monitoring: accuracy, recall, and timeliness. One way to improve the results is to use the scalar product of the normalized components of the metric space and their importances as features in a machine learning model. The permutation feature importance method, which does not depend on the chosen machine learning algorithm, is used for this purpose. Numerical experiments have shown that using distances in a metric space to incident points from the training sample can improve the early detection of dangerous situations by up to a factor of two. The proposed approach is versatile and can be applied to various classification algorithms and distance calculation methods.
Keywords: monitoring, machine learning, state classification, incident prediction, lead time, anomaly detection
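One possible reading of the combination described in the abstract, sketched with scikit-learn: features are min-max normalised, weighted by permutation importance, and the distance of a new observation to the nearest historical incident is folded into the classifier's incident probability. The specific classifier, weighting, and combination rule are assumptions, not the paper's exact procedure; incidents are assumed to carry label 1 in NumPy arrays.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def monitoring_scores(X_train, y_train, X_new):
    """Combine incident probability with an importance-weighted distance to known incidents."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    imp = permutation_importance(clf, X_train, y_train, n_repeats=10, random_state=0)
    weights = np.clip(imp.importances_mean, 0.0, None)
    weights = (weights / weights.sum() if weights.sum() > 0
               else np.full(len(weights), 1.0 / len(weights)))

    lo, span = X_train.min(axis=0), np.ptp(X_train, axis=0) + 1e-9
    normalise = lambda X: (X - lo) / span               # min-max normalisation of the metric space
    incidents = normalise(X_train[y_train == 1])        # historical incident points (label 1)

    proba = clf.predict_proba(X_new)[:, 1]              # probability of an incident
    diffs = normalise(X_new)[:, None, :] - incidents[None, :, :]
    nearest = np.sqrt(((diffs ** 2) * weights).sum(axis=2)).min(axis=1)
    # weighted distance is at most 1 for points inside the training range
    return proba * (1.0 - nearest)
```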
The article discusses the problem of wear of feeding machine rollers caused by speed mismatch in the material tracking mode. Existing methods of combating wear address the effect of the problem rather than its cause. One way to reduce the intensity of wear of the roller barrels is to develop a method of controlling the speed of the feeding machine that reduces the mismatch between the speeds of the rollers and the rolled product without violating the known technological requirements for creating pulling and braking forces. An algorithm is presented for calculating the speed correction based on metal tension, which compensates for roller wear and reduces the friction force. Modeling of the system with the developed algorithm showed that the speed mismatch in the material tracking mode is eliminated, which will reduce the intensity of roller wear.
Keywords: speed correction system, feeding machine, roller wear, metal tension, control system, speed mismatch, friction force reduction
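Purely as an illustration of a tension-based speed correction (not the authors' algorithm): an incremental PI regulator drives the measured metal tension to its setpoint by trimming the roller speed reference within allowed limits. The gains, sample time, and limit are hypothetical placeholders.

```python
def tension_speed_correction(correction_prev, error, error_prev,
                             kp=0.002, ki=0.01, dt=0.01, limit=0.05):
    """One step of an incremental PI correction of the roller speed reference.

    error = tension setpoint minus measured tension; the returned value is a relative
    speed correction (fraction of the base reference), clamped to technological limits.
    """
    correction = correction_prev + kp * (error - error_prev) + ki * dt * error
    return max(-limit, min(limit, correction))

# usage sketch, applied once per control cycle:
# speed_reference = base_speed * (1.0 + correction)
```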
PHP Data Objects (PDO) represents a significant advancement in PHP application development by providing a universal approach to interacting with database management systems (DBMSs). The article opens with an introduction describing the need for PDO, available since PHP 5.1, which allows PHP developers to interact with different databases through a single interface, minimising the effort involved in portability and code maintenance. It discusses how PDO improves security by supporting prepared statements, which are a defence against SQL injection. The main part of the paper analyses the key advantages of PDO, such as its versatility in connecting to multiple databases (e.g. MySQL, PostgreSQL, SQLite), the ability to use prepared statements to enhance security, improved error handling through exceptions, transaction support for data integrity, and the ease of learning the PDO API even for beginners. Practical examples are provided, including preparing and executing SQL queries, setting attributes via the setAttribute method, and performing operations within transactions, emphasising the flexibility and robustness of PDO. In addition, the paper discusses best practices for using PDO in complex and high-volume projects, such as using prepared statements for bulk data insertion, query optimisation, and stream processing for efficient handling of large amounts of data. The conclusion characterises PDO as the preferred tool for modern web applications, offering a combination of security, performance and code quality enhancement. The authors also suggest directions for future research regarding security test automation and the impact of different data models on application performance.
Keywords: PHP, PDO, databases, DBMS, security, prepared queries, transactions, programming
The article presents the main stages and recommendations for the development of an information and analytical system (IAS) based on geographic information systems (GIS) in the field of rational management of forest resources, providing for the processing, storage and presentation of information on forest wood resources, as well as a description of some specific examples of the implementation of its individual components and digital technologies. The following stages of IAS development are considered: the stage of collecting and structuring data on forest wood resources; the stage of justifying the type of software implementation of the IAS; the stage of equipment selection; the stage of developing a data analysis and processing unit; the stage of developing the architecture of interaction of IAS blocks; the stage of developing the IAS application interface; the stage of testing the IAS. It is proposed to implement the interaction between the client and server parts based on Asynchronous JavaScript and XML (AJAX) technology. It is recommended to use the open source Leaflet libraries for visualization of geodata. To store large amounts of data on the server, it is proposed to use the SQLite database management system. The proposed approaches can find application in the creation of an IAS for the formation of management decisions in the field of rational management of forest wood resources.
Keywords: geographic information systems, forest resources, methodology, web application, AJAX technology, SQLite, Leaflet, information processing
With the digitalisation of the construction industry and import substitution, more attention is being paid to the transition to domestically developed software. At each stage of construction, dedicated software products are needed, including CAD and BIM systems. The article considers the experience of integrating Russian-made systems for the tasks of information modeling of transport infrastructure and road construction. Within the framework of the work, the Vitro-CAD common data environment (CDE) was integrated with the Topomatic Robur software system. Joint work of the construction project participants was organized in a single information space. The efficiency gain for the project participants, resulting from their release from routine operations, was determined. The integration experience has shown that the combination of Vitro-CAD and Topomatic Robur makes it possible to manage project data efficiently, store files with version tracking, and coordinate documentation and issue comments on it.
Keywords: common data environment, information space, information model, digital ecosystem, computer-aided design, building information modeling, automation, integration, import substitution, software complex, platform, design documentation, road construction
When evaluating student work, the analysis of written assignments, particularly the analysis of source code, becomes especially relevant. This article discusses an approach for evaluating the dynamics of feature changes in students' source code. Various source code metrics are analyzed and key metrics are identified, including quantitative metrics, program control flow complexity metrics, and the TIOBE quality indicator. A set of text data containing program source code from a website dedicated to practical programming was used to determine threshold values for each metric and to categorize them. The obtained results were used to analyze students' source code using a developed service that allows work to be evaluated based on key features, the dynamics of code indicators to be observed, and a student's position within the group to be understood based on the obtained values.
Keywords: machine learning, text data analysis, program code analysis, digital footprint, data visualization
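The kinds of metrics mentioned (quantitative metrics and control-flow complexity) can be sketched for Python submissions as follows. The metric set, the branching constructs counted, and the absence of the TIOBE-style quality index are simplifications for illustration, not the described service's actual implementation.

```python
import ast

def code_metrics(source: str) -> dict:
    """Simple quantitative and control-flow metrics for one Python submission."""
    tree = ast.parse(source)
    lines = [l for l in source.splitlines() if l.strip() and not l.strip().startswith("#")]
    branch_nodes = (ast.If, ast.For, ast.While, ast.Try, ast.With,
                    ast.BoolOp, ast.ExceptHandler)
    # McCabe-style complexity: one plus the number of branching constructs
    complexity = 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))
    functions = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return {
        "loc": len(lines),
        "functions": len(functions),
        "cyclomatic_complexity": complexity,
        "avg_function_length": (sum(len(f.body) for f in functions) / len(functions)
                                if functions else 0.0),
    }

print(code_metrics("def f(x):\n    if x > 0:\n        return x\n    return -x\n"))
```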
This article discusses two of the most popular algorithms for constructing dominator trees in the context of static code analysis for the Solidity programming language. Both algorithms, the Cooper-Harvey-Kennedy iterative algorithm and the Lengauer-Tarjan algorithm, are considered effective and widely used in practice. The article compares these algorithms, evaluates their complexity, and selects the most preferable option in the context of Solidity. Criteria such as execution time and memory usage were used for the comparison. The Cooper-Harvey-Kennedy iterative algorithm showed higher performance on small projects, while the Lengauer-Tarjan algorithm performed better when analyzing larger projects. Overall, however, the Cooper-Harvey-Kennedy iterative algorithm was found to be preferable in the context of Solidity, as it showed higher efficiency and accuracy when analyzing smart contracts in this programming language. This article may be useful for developers and researchers involved in static code analysis for Solidity, who can use the results and conclusions of this study in their work.
Keywords: dominator tree, Solidity, algorithm comparison
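For reference, a compact Python version of the Cooper-Harvey-Kennedy iterative dominance algorithm, which the article finds preferable for typical Solidity control-flow graphs. The dictionary-based graph representation and the small example CFG are assumptions for illustration.

```python
def dominator_tree(succ, entry):
    """Immediate dominators via the Cooper-Harvey-Kennedy iterative algorithm.

    succ: dict mapping each node to a list of successors (the control-flow graph).
    Returns idom[node] -> immediate dominator, with idom[entry] == entry.
    """
    order, seen = [], set()
    def dfs(n):                                   # depth-first search for a reverse postorder
        seen.add(n)
        for s in succ.get(n, []):
            if s not in seen:
                dfs(s)
        order.append(n)
    dfs(entry)
    rpo = order[::-1]
    number = {n: i for i, n in enumerate(rpo)}    # reverse-postorder index of every node

    preds = {n: [] for n in rpo}
    for n in rpo:
        for s in succ.get(n, []):
            if s in number:
                preds[s].append(n)

    idom = {entry: entry}

    def intersect(a, b):                          # walk both fingers up until they meet
        while a != b:
            while number[a] > number[b]:
                a = idom[a]
            while number[b] > number[a]:
                b = idom[b]
        return a

    changed = True
    while changed:
        changed = False
        for n in rpo:
            if n == entry:
                continue
            processed = [p for p in preds[n] if p in idom]
            new_idom = processed[0]
            for p in processed[1:]:
                new_idom = intersect(new_idom, p)
            if idom.get(n) != new_idom:
                idom[n] = new_idom
                changed = True
    return idom

# small example CFG: entry -> a, b; a -> c; b -> c
print(dominator_tree({"entry": ["a", "b"], "a": ["c"], "b": ["c"]}, "entry"))
```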
This article explores the probabilistic characteristics of closed queuing systems, with a particular focus on the differences between "patient" and "impatient" demands. These categories of requests play a crucial role in understanding the dynamics of service, as patient demands wait in line, while impatient ones may be rejected if their waiting time exceeds a certain threshold. The uniqueness of this work lies in the analysis of a system with a three-component structure of incoming flow, which allows for a more detailed examination of the behavior of requests and the influence of various factors on service efficiency. The article derives key analytical expressions for determining probabilistic characteristics such as average queue length, rejection probability, and other critical metrics. These expressions enable not only the assessment of the current state of the system but also the prediction of its behavior under various load scenarios. The results of this research may be useful for both theoretical exploration of queuing systems and practical application in fields such as telecommunications, transportation, and service industries. The findings will assist specialists in developing more effective strategies for managing request flows, thereby improving service quality and reducing costs.
Keywords: waiting, queue, service, Markov process, queuing system with constraints, flow of requests, simulation modeling, mathematical model
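A small simulation sketch of the "impatient demands" effect using the simpy discrete-event package: requests renege if their waiting time exceeds a patience threshold, and the rejection probability is estimated empirically. This is a simplified open-system illustration with arbitrary parameters, not the closed three-component model analysed in the article.

```python
import random
import simpy   # assumes the simpy discrete-event simulation package

ARRIVAL_RATE, SERVICE_RATE, PATIENCE, CHANNELS = 1.0, 0.4, 2.0, 2
stats = {"served": 0, "reneged": 0}

def request(env, servers):
    """One demand: wait for a free channel, but leave if patience is exhausted."""
    with servers.request() as req:
        result = yield req | env.timeout(PATIENCE)
        if req in result:
            yield env.timeout(random.expovariate(SERVICE_RATE))
            stats["served"] += 1
        else:
            stats["reneged"] += 1      # impatient demand leaves the queue unserved

def source(env, servers):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        env.process(request(env, servers))

random.seed(1)
env = simpy.Environment()
servers = simpy.Resource(env, capacity=CHANNELS)
env.process(source(env, servers))
env.run(until=10_000)

total = stats["served"] + stats["reneged"]
print(f"estimated rejection probability: {stats['reneged'] / total:.3f}")
```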
Oil spills require timely measures to eliminate their causes and neutralize their consequences. The use of case-based reasoning is promising for developing specific technological solutions to eliminate oil spills. It is important to structure the description of possible situations and to form a representation of solutions. This paper presents the results of these tasks. A structure for representing oil product spill situations based on a situation tree is proposed, an algorithm for situational decision-making using this structure is described, and parameters for describing oil product spill situations and representing solutions are proposed. The situation tree makes it possible to form a representation of situations based on the analysis of various source information. This approach makes it possible to quickly refine the parameters and select similar situations from the knowledge base, whose solutions can be used in the current undesirable situation.
Keywords: case-based reasoning; decision making; oil spill, oil spill response, decision support, situation tree
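An illustrative sketch of the retrieval step only: a spill situation is described by a set of parameters, and the most similar past cases are selected from the knowledge base by a weighted similarity measure. The parameter names, weights, similarity function, and stored cases are hypothetical placeholders; the situation-tree structure itself is not reproduced.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A stored oil-spill case: situation parameters plus the decision that was applied."""
    params: dict
    solution: str

# hypothetical situation parameters and their importance weights
WEIGHTS = {"spill_volume_t": 0.4, "water_body": 0.2, "wind_speed_ms": 0.2, "oil_type": 0.2}

def similarity(a: dict, b: dict) -> float:
    """Weighted similarity: numeric parameters by relative closeness, categorical by equality."""
    score = 0.0
    for key, w in WEIGHTS.items():
        x, y = a.get(key), b.get(key)
        if isinstance(x, (int, float)) and isinstance(y, (int, float)):
            score += w * (1.0 - abs(x - y) / max(abs(x), abs(y), 1e-9))
        else:
            score += w * (1.0 if x == y else 0.0)
    return score

def retrieve(query: dict, knowledge_base: list, k: int = 3) -> list:
    """Return the k stored cases most similar to the current situation."""
    return sorted(knowledge_base, key=lambda c: similarity(query, c.params), reverse=True)[:k]

kb = [Case({"spill_volume_t": 5, "water_body": "river", "wind_speed_ms": 4, "oil_type": "diesel"},
           "deploy booms downstream, skim within 6 h"),
      Case({"spill_volume_t": 40, "water_body": "soil", "wind_speed_ms": 2, "oil_type": "crude"},
           "excavate contaminated soil, apply sorbents")]
query = {"spill_volume_t": 7, "water_body": "river", "wind_speed_ms": 5, "oil_type": "diesel"}
print(retrieve(query, kb)[0].solution)
```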