ISSA-ELM: A Network Security Situation Prediction Model

1. Introduction

With the rapid development of the information revolution and cyberspace, the Internet has greatly promoted the prosperity and progress of the economy and society. It has brought great convenience to people, along with new security risks and challenges. Traditional network security defense systems provide mainly passive defense based on firewalls and anti-virus software. Such systems have serious shortcomings in preventing security threats: they cannot deal with new viruses and cannot prevent attacks in advance, and once a system is invaded, it can suffer huge losses. As the final step of AI-enabled, big-data-driven cyber security situational awareness, network security situation prediction analyzes the previous and current state of the network and then predicts its future situation. It can formulate safe and effective preventive measures before the network is attacked, shifting from passive defense to active defense that performs dynamic analysis, real-time monitoring, and trend prediction. Therefore, designing an effective and accurate network security situation prediction model is a key step in the transition from passive defense to active defense.

At present, there are various network security situation prediction models [1]. Traditional prediction models can be divided into gray prediction models [2,3], D-S evidence theory models [4], and artificial intelligence models [5]. Gray prediction models require accurate mathematical expressions; they are computationally intensive and can only predict the general trend of a network situation. D-S evidence theory models are qualitative knowledge models based on expert experience and qualitatively described data; they cannot effectively use quantitative data, may face combinatorial explosion, and their prediction accuracy is biased by the uncertainty of expert experience. Neither type can meet the situation prediction requirements of large and complex networks. Artificial intelligence models use various neural networks to train on quantitative data, but their learning speed is slow and they are prone to overfitting. Zhang et al. [6] used the gray correlational-entropy method to analyze the correlation of the factors that affect network security, selected the key factors, established the corresponding process and prediction equations based on these factors, and predicted the network security situation recursively using Kalman filtering; although the prediction accuracy is higher than that of the RBF algorithm, the computational cost is also higher. Wang et al. [7] proposed an improved D-S evidence theory for correlation analysis that fuses the reliability and rationality of the prediction results; it reduces the false-alarm rate of the system, but it cannot be used in large-scale networks. Ren et al. [8] established an improved BP neural network prediction model whose predictions are consistent with actual situations, but the gradient-descent characteristics of the BP neural network give the algorithm a lengthy training time. To reduce the training time, Zhu et al. [9] used two nonlinear mathematical modeling methods, the ELM and the multilayer perceptron neural network (MLPNN), to establish a prediction model that also has good adaptability.

In order to find better algorithms for building network security situation prediction models, this paper introduces a meta-heuristic search algorithm into the ELM, using its optimization capability to overcome the shortcomings of the ELM itself and achieve better training results. In recent years, many researchers have proposed a variety of meta-heuristic search algorithms and improved versions of them. Meta-heuristic search algorithms are bio-inspired optimization algorithms abstracted from observation and simulation of the natural habits of biological populations. They generally have advantages such as proximity, stability, and adaptability, and are widely used in image processing [10] and feature selection [11]. Common meta-heuristic search algorithms include particle-swarm optimization [12], gray wolf optimization [13], whale optimization [14], sine cosine [15], salp swarm [16], and sparrow search [17] algorithms. Meta-heuristic search algorithms tend to fall into local optima, which reduces population diversity in later iterations. In response to this problem, many researchers have proposed improvements to various algorithms. For example, Liu et al. [18] introduced an adaptive leader-follower adjustment strategy to address the unstable solutions of the salp swarm algorithm, which enhanced the stability of the algorithm. Zhou et al. [19] used cat-map chaotic sequences combined with the reverse-solution method instead of randomly generated initial populations to avoid the premature convergence of the whale optimization algorithm, improving its initial population diversity and search traversal. Zhou et al. [20] used tent chaotic mapping to improve the wolf-pack initialization method, making the initial distribution of wolves more uniform and enhancing the global search capability of the algorithm. Zhang et al. [21] proposed an improved whale optimization algorithm (NGS-WOA) based on nonlinear adaptive weights and a golden sine operator: the nonlinear adaptive weight enables the search agent to explore the search space adaptively and balance the exploitation and exploration stages, while the improved golden sine operator gives NGS-WOA strong global convergence and helps it avoid falling into local optima. Zhang et al. [22] proposed a new Gaussian mutation operator for the fireworks algorithm that lets sparks learn from more samples, combined the explosion operator of the fireworks algorithm with the migration operator of biogeography-based optimization (BBO) to increase information sharing, and adopted a new overall selection strategy that gives high-quality solutions a high probability of entering the next generation without high computing cost. Cheng et al. [23] used an improved tent chaos mapping to initialize the population, increasing population diversity, and added an adaptive local search strategy to improve global search ability. Liang et al. [24] proposed an improved SSA based on adaptive weights and improved boundary constraints: the adaptive weights improve the convergence speed of the algorithm, and the improved boundary-handling strategy improves the convergence accuracy to a certain extent.

In this paper, we weigh the advantages and disadvantages of the ELM and of meta-heuristic search algorithms, and improve the SSA among the latter. By combining the improved SSA (ISSA) with an ELM, we propose an ISSA-ELM network security situation prediction model. By comparing ISSA with five other algorithms on 16 benchmark functions, we verify the superior performance of the improved algorithm. We also conduct network situation prediction experiments with the traditional ELM, the GA-ELM algorithm presented by Krishnan et al. [25], and SSA-ELM; the comparison verifies the practicability and accuracy of our model.

2. Extreme Learning Machine

The extreme learning machine was first proposed by Huang et al. [26] in 2004. It randomly selects the input-layer weights and hidden-layer biases of a single-hidden-layer feedforward neural network, and the output-layer weights are then calculated analytically according to Moore–Penrose generalized inverse matrix theory. The extreme learning machine requires few training parameters, learns fast, and has strong generalization ability. Let the numbers of nodes in the input layer, hidden layer, and output layer of the ELM be $n$, $l$, and $m$, respectively. The network structure is shown in Figure 1.

For $N$ arbitrary distinct samples $(x_i, t_i)$, where $x_i = (x_{i1}, x_{i2}, \ldots, x_{in})^T$ and $t_i = (t_{i1}, t_{i2}, \ldots, t_{im})^T$, the output of the ELM is as follows:

$$f(x_j) = \sum_{i=1}^{l} \beta_i \, g(w_i, b_i, x_j), \quad j = 1, 2, \ldots, N$$

where $w_i = (w_{i1}, w_{i2}, \ldots, w_{in})^T$ is the input weight vector between the input-layer neurons and the $i$-th hidden-layer neuron; $\beta_i = (\beta_{i1}, \beta_{i2}, \ldots, \beta_{im})^T$ is the output weight vector between the $i$-th hidden-layer neuron and the output-layer neurons; $b_i$ is the bias of the $i$-th hidden-layer neuron; and $g(\cdot)$ is the activation function of the hidden-layer neurons. The matrix expression of the ELM system is as follows:

$$H\beta = T$$

where
$$H = \begin{bmatrix} g(w_1,b_1,x_1) & g(w_2,b_2,x_1) & \cdots & g(w_l,b_l,x_1) \\ g(w_1,b_1,x_2) & g(w_2,b_2,x_2) & \cdots & g(w_l,b_l,x_2) \\ \vdots & \vdots & & \vdots \\ g(w_1,b_1,x_N) & g(w_2,b_2,x_N) & \cdots & g(w_l,b_l,x_N) \end{bmatrix}_{N \times l}, \quad \beta = \begin{bmatrix} \beta_1^T \\ \beta_2^T \\ \vdots \\ \beta_l^T \end{bmatrix}_{l \times m}, \quad T = \begin{bmatrix} t_1^T \\ t_2^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}.$$

In order to achieve the final training effect of the ELM, the least-squares solution $\hat{\beta}$ needs to be obtained such that:

$$\left\| H\hat{\beta} - T \right\| = \min_{\beta} \left\| H\beta - T \right\|$$

where $H$ is the hidden-layer output matrix of the ELM network and $T$ is the expected output matrix of the network's samples. Finally, the output weight is obtained by solving the formula:

$$\hat{\beta} = H^{\dagger} T$$

where $H^{\dagger}$ is the Moore–Penrose generalized inverse of the hidden-layer output matrix $H$.

It follows that the ELM does not need the gradient-descent method when training on samples. Compared with a traditional back-propagation neural network trained by gradient descent, the ELM greatly reduces training time while retaining accurate prediction capability.
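
To make the training procedure concrete, here is a minimal Python/NumPy sketch of the ELM described above. The sigmoid activation, the [−1, 1] initialization ranges, and the toy sine-fitting task are illustrative assumptions of this sketch, not settings from the paper.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Train an ELM: random input weights/biases, analytic output weights."""
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, (n_hidden, n_features))  # input weights w_i
    b = rng.uniform(-1.0, 1.0, n_hidden)                # hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))            # sigmoid activation g
    beta = np.linalg.pinv(H) @ T                        # Moore-Penrose solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta

# toy usage: fit y = sin(x)
X = np.linspace(0, np.pi, 120).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=20)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))   # small training MSE
```

Because the output weights are obtained in a single pseudoinverse step, the only iterative cost left is whatever search is used to pick good input weights, which is exactly where ISSA enters later.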

3. Sparrow Search Algorithm and Its Improvement

3.1. Sparrow Search Algorithm

Tang et al. [27] pointed out that the sparrow search algorithm can be abstracted as an explorer-follower-warner model with three position-update formulas:

$$x_{i,d}^{t+1} = \begin{cases} x_{i,d}^{t} \cdot \exp\left(\dfrac{-i}{\alpha \cdot iter_{\max}}\right), & R_2 < ST \\[2mm] x_{i,d}^{t} + Q \cdot L, & R_2 \ge ST \end{cases}$$

$$x_{i,d}^{t+1} = \begin{cases} Q \cdot \exp\left(\dfrac{x_{worst,d}^{t} - x_{i,d}^{t}}{i^{2}}\right), & i > n/2 \\[2mm] x_{best,d}^{t+1} + \dfrac{1}{D} \sum_{d=1}^{D} \left( \left| x_{i,d}^{t} - x_{best,d}^{t+1} \right| \cdot \mathrm{rand}\{-1,1\} \right), & \text{otherwise} \end{cases}$$

$$x_{i,d}^{t+1} = \begin{cases} x_{best,d}^{t} + \beta \cdot \left| x_{i,d}^{t} - x_{best,d}^{t} \right|, & f_i > f_g \\[2mm] x_{i,d}^{t} + K \cdot \dfrac{\left| x_{i,d}^{t} - x_{worst,d}^{t} \right|}{\left| f_i - f_w \right| + \varepsilon}, & f_i = f_g \end{cases}$$

where $x$ represents the sparrow position, $t$ the iteration number, $Q$ a random number, $L$ a $1 \times d$ matrix, $R_2$ the alert value, and $ST$ the safety threshold; $\beta$ is a step-size control coefficient, $K$ a random number in $[-1, 1]$, and $\varepsilon$ the smallest constant, used to avoid division by zero.
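
For concreteness, the following is a compact Python sketch of one SSA iteration following the three update rules above. The parameter choices (ST = 0.8, 20% explorers, a random 10% of sparrows acting as warners) and the reuse of stale fitness values inside the step are simplifying assumptions of this sketch, not prescriptions from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ssa_step(X, fit, iter_max, lb, ub, ST=0.8, p_frac=0.2):
    """One iteration of basic SSA (sketch): explorers, followers, warners."""
    N, D = X.shape
    order = np.argsort(fit)                  # sort sparrows, best first
    X, fit = X[order].copy(), fit[order]
    pnum = max(1, int(p_frac * N))           # explorer count (fixed in SSA)
    alpha, R2 = rng.random(), rng.random()   # random scale and alarm value
    for i in range(pnum):                    # explorer update
        if R2 < ST:
            X[i] = X[i] * np.exp(-(i + 1) / (alpha * iter_max))
        else:
            X[i] = X[i] + rng.normal() * np.ones(D)          # Q * L
    for i in range(pnum, N):                 # follower update
        if i > N / 2:
            X[i] = rng.normal() * np.exp((X[-1] - X[i]) / (i + 1) ** 2)
        else:
            X[i] = X[0] + np.mean(np.abs(X[i] - X[0]) * rng.choice([-1, 1], D))
    for i in rng.choice(N, max(1, N // 10), replace=False):  # warners (10%)
        if fit[i] > fit[0]:                  # away from the best position
            X[i] = X[0] + rng.normal() * np.abs(X[i] - X[0])
        else:                                # already at the best fitness
            K = rng.uniform(-1, 1)
            X[i] = X[i] + K * np.abs(X[i] - X[-1]) / (abs(fit[i] - fit[-1]) + 1e-50)
    return np.clip(X, lb, ub)
```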

3.2. Initial Population by Cat Mapping Chaos

Chaos, a nonlinear natural phenomenon, is widely used in optimization search problems because chaotic sequences offer ergodicity and randomness. In order to maintain population diversity and distribute individuals as uniformly as possible, this paper adopts a chaotic sequence-initialization strategy instead of the random population generation of the SSA algorithm. A variety of chaotic mappings are used in the optimization field, including logistic mapping, tent mapping, and cat mapping. For the logistic map, a typical chaotic system, the probability density of the generated sequence follows a Chebyshev distribution: mapping points are sparse in the middle of the interval and dense at both ends, which makes the traversal non-uniform and reduces the search efficiency of the algorithm. Chen et al. [28] showed that tent mapping is better than logistic mapping in traversal uniformity and convergence speed; however, tent mapping is prone to short cycle periods and fixed points, and tends to find the optimal solution only at boundary values. To address the shortcomings of logistic and tent mapping, the initial population of the SSA algorithm is generated by cat mapping in this paper. The cat-map expression is:

$$\begin{bmatrix} y_{i+1} \\ w_{i+1} \end{bmatrix} = \begin{bmatrix} 1 & a_1 \\ b_1 & a_1 b_1 + 1 \end{bmatrix} \begin{bmatrix} y_i \\ w_i \end{bmatrix} \bmod 1$$

where $a_1$ and $b_1$ are arbitrary real numbers, and $\bmod\ 1$ denotes taking the fractional part.

Because of the simple structure of the cat map, it does not easily fall into short cycles or fixed points, and the initial population generated by this map has better traversal uniformity.

According to the characteristics of the cat mapping, a chaotic sequence is generated in the feasible domain and combined with reverse-solution initialization as follows. First, randomly generate a feasible solution for the current population, denoted as:

$$\left\{ Y_i = [y_{i1}, y_{i2}, \ldots, y_{id}, \ldots, y_{iD}] ;\; y_{id} \in [lb_{id}, ub_{id}] \right\}$$

Then the reverse solution is:

$$Y' = [Y'_1, Y'_2, \ldots, Y'_d, \ldots, Y'_D], \qquad y'_{id} = q \cdot (lb_{id} + ub_{id}) - y_{id}$$

where $q$ is a uniform random number on the interval $[0, 1]$, and $lb_{id}$ and $ub_{id}$ denote the lower and upper bounds of the feasible solutions, respectively.
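
The following sketch shows how the cat map and the reverse solutions can seed the population together. Taking $a_1 = b_1 = 1$, seeding $y$ and $w$ randomly, and returning the combined candidate pool so that the caller keeps the $N$ fittest of the $2N$ candidates are assumptions of this sketch.

```python
import numpy as np

def cat_map_init(N, D, lb, ub, a1=1.0, b1=1.0, seed=2):
    """Chaotic initial population via the cat map, plus reverse solutions."""
    rng = np.random.default_rng(seed)
    y, w = rng.random(D), rng.random(D)      # chaos state in [0, 1)
    pop = np.empty((N, D))
    for i in range(N):
        # cat map: [y; w] <- [[1, a1], [b1, a1*b1 + 1]] [y; w] mod 1
        y, w = (y + a1 * w) % 1.0, (b1 * y + (a1 * b1 + 1) * w) % 1.0
        pop[i] = lb + y * (ub - lb)          # map chaos variable into [lb, ub]
    q = rng.random((N, D))
    opp = q * (lb + ub) - pop                # reverse (opposition) solutions
    return np.vstack([pop, np.clip(opp, lb, ub)])

lb, ub = np.full(5, -10.0), np.full(5, 10.0)
candidates = cat_map_init(30, 5, lb, ub)
print(candidates.shape)  # (60, 5): population plus its reverse solutions
```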

3.3. Tent Chaos and Cauchy Variation Perturbation Strategy

We augment the sparrow search algorithm with a joint perturbation strategy combining tent chaotic perturbation and Cauchy mutation. To keep the tent map from falling into unstable cycle points, we add a random term; the improved tent-map expression is:

$$z_{i+1} = \begin{cases} 2 z_i + \mathrm{rand}(0,1) \cdot \dfrac{1}{N}, & 0 \le z_i \le \dfrac{1}{2} \\[2mm] 2 (1 - z_i) + \mathrm{rand}(0,1) \cdot \dfrac{1}{N}, & \dfrac{1}{2} < z_i \le 1 \end{cases}$$

Cauchy mutation derives from the Cauchy distribution, a continuous probability distribution [29], and is given by:

$$\mathrm{mutation}(x) = x \cdot \left( 1 + \tan\left( \pi (u - 0.5) \right) \right)$$

The tent chaotic perturbation and the Cauchy mutation perturbation are applied in different cases so that the sparrow search algorithm is always steered toward the optimal position.
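
A small sketch of both perturbation operators follows. Wrapping the tent output back into [0, 1) with a modulo is an implementation assumption added for safety, since the random term can push values slightly above 1.

```python
import numpy as np

rng = np.random.default_rng(3)

def tent_perturb(z, N):
    """Improved tent map with a random term to escape unstable cycle points."""
    z = np.asarray(z, dtype=float)
    low = z <= 0.5
    out = np.where(low, 2 * z, 2 * (1 - z)) + rng.random(z.shape) / N
    return out % 1.0                         # keep the sequence in [0, 1)

def cauchy_mutation(x):
    """Cauchy mutation: x * (1 + tan(pi * (u - 0.5))), u ~ U(0, 1)."""
    u = rng.random(np.shape(x))
    return x * (1.0 + np.tan(np.pi * (u - 0.5)))

z = rng.random(5)
print(tent_perturb(z, N=30))
print(cauchy_mutation(np.array([1.0, -2.0, 0.5])))
```

The heavy tails of the Cauchy distribution occasionally produce large jumps, which is what lets a stagnated sparrow escape a local optimum, while the tent term supplies smaller, more uniform disturbances.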

3.4. Improved Explorer Location Update Formula

In the SSA algorithm, a sparrow explorer is influenced only by the position of the previous generation of explorers, and the value of $\exp\left(\frac{-i}{\alpha \cdot iter_{\max}}\right)$ adaptively decreases over the iterations. When its value is large, the explorer is in extensive-search mode; as the value decreases, the explorer mainly performs a narrow search, i.e., it digs deeper in the vicinity of the optimal solution to improve the convergence accuracy of the algorithm. The value of $\exp\left(\frac{-i}{\alpha \cdot iter_{\max}}\right)$ is therefore particularly important, and a small change in it has a large impact on the explorer. Based on the original explorer update equation, in this paper the explorer update formula is changed to:

$$x_{i,d}^{t+1} = \begin{cases} x_{i,d}^{t} \cdot \left[ 2 \exp\left( \dfrac{-4i}{\alpha \cdot iter_{\max}} \right) \right]^{m}, & R_2 < ST \\[2mm] x_{i,d}^{t} + Q \cdot L, & R_2 \ge ST \end{cases}$$

Let $c = \left[ 2 \exp\left( \frac{-4i}{\alpha \cdot iter_{\max}} \right) \right]^{m}$, and choose $m$ from 1 to 4 to analyze the effect of the parameter $m$ on the performance of the explorer, as shown in Figure 2.

The choice of $m$ affects the balance between the global and local searches of the sparrow explorer. As can be seen from Figure 2, each curve decays quickly in the early stage and slowly in the late stage. To select the most appropriate value of $m$, the Schaffer function was used as a test: a large number of local minima surround its global optimum (within a range of roughly 3.14), and the function oscillates strongly. The value of $m$ was varied seven times, and for each setting the optimization was run 50 times and averaged, yielding the comparison in Table 1. The SSA algorithm itself performs well enough to escape the local minima of the Schaffer function and find the optimal value 0, so the influence of $m$ is mainly reflected in the number of convergence iterations: as $m$ increases, the number of convergence iterations first decreases and then increases, and the average number of convergence iterations is smallest when $m$ is 2. In summary, taking $m = 2$ in the parameter $c$ strikes a good balance between the exploration and foraging abilities of the SSA algorithm: the explorer searches broadly in the early stage and focuses on the optimal location in the later stage.
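
To make the decay factor concrete, the snippet below evaluates $c$ over 500 steps for each candidate $m$, a rough numeric stand-in for Figure 2. Fixing $\alpha = 1$ for determinism is an assumption, since $\alpha$ is a random scaling in SSA.

```python
import numpy as np

iter_max, alpha = 500, 1.0
i = np.arange(1, iter_max + 1)               # index runs over 1..iter_max
for m in (1, 2, 3, 4):
    c = (2 * np.exp(-4 * i / (alpha * iter_max))) ** m
    # larger m => faster decay: wide early search, tighter late search
    print(f"m={m}: c[1]={c[0]:.3f}, c[250]={c[249]:.4f}, c[500]={c[-1]:.5f}")
```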

3.5. Explorer-Follower Adaptive Adjustment Strategy

In the SSA algorithm, the ratio of explorers to followers is kept constant. This leaves relatively few explorers in the early iterations, preventing an adequate global search, and relatively many explorers in the late iterations, when extensive global search is no longer needed and more followers are required for an accurate local search. To solve this problem, this paper proposes an explorer-follower adaptive adjustment strategy in which explorers account for the majority of the population in the early iterations. As the number of iterations increases, the number of explorers adaptively decreases and the number of followers adaptively increases, gradually shifting from global search to exact local search and improving the convergence accuracy of the algorithm as a whole. The numbers of explorers and followers are adjusted by the formulas:

$$r = b \cdot \left( \tan\left( -\dfrac{\pi t}{4 \cdot iter_{\max}} + \dfrac{\pi}{4} \right) - k \cdot \mathrm{rand}(0,1) \right)$$

$$pNum = r \cdot N$$

$$sNum = (1 - r) \cdot N$$

where $pNum$ is the number of explorers and $sNum$ the number of followers; $b$ is a scaling factor that controls the balance between explorers and followers; and $k$ is a perturbation deviation factor that perturbs the nonlinearly decreasing value of $r$.
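
A minimal sketch of the adaptive split follows. The values $b = 0.6$ and $k = 0.1$, the clipping of $r$ to a sane range, and deriving the follower count as the remainder so that the two counts sum to $N$ are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def explorer_follower_counts(t, iter_max, N, b=0.6, k=0.1):
    """Adaptive explorer/follower split: r decays nonlinearly with t."""
    r = b * (np.tan(-np.pi * t / (4 * iter_max) + np.pi / 4)
             - k * rng.random())
    r = float(np.clip(r, 0.05, 0.95))        # keep a sane ratio (assumption)
    p_num = max(1, int(r * N))               # explorers: many early, few late
    s_num = N - p_num                        # followers take the remainder
    return p_num, s_num

for t in (0, 250, 500):
    print(t, explorer_follower_counts(t, 500, 100))
```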

3.6. Improved Sparrow Search Algorithm

The flow chart of the ISSA algorithm is shown in Figure 3.

4. ISSA-ELM Prediction Model

Because the input weight matrix and hidden-layer biases are given randomly, some values in the standard ELM may be 0, which makes some hidden-layer nodes invalid and results in poor prediction and insufficient stability. To achieve accurate prediction, the number of hidden-layer nodes must then be increased; however, increasing it leads to poor adaptability to the training samples and reduced generalization ability. Therefore, to ensure that the ELM has high prediction accuracy with an optimal number of hidden-layer nodes, the ISSA algorithm is used to optimize the ELM. With the help of the global search capability of ISSA, the ELM's input weights and hidden-layer biases are optimized, which enhances the stability of the ELM without reducing its convergence speed. In the process of training on input samples, the ELM further learns the relationships between the samples. The algorithm flowchart is presented in Figure 4.
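
To show how ISSA and the ELM are wired together, here is a minimal sketch of the fitness function such a hybrid typically minimizes: each sparrow's position vector encodes the ELM input weights and hidden biases, and its fitness is the resulting training error. The sigmoid activation, RMSE fitness, and 20 hidden nodes are assumptions of this sketch rather than settings reported in the paper.

```python
import numpy as np

def decode(individual, n_features, n_hidden):
    """Split a flat sparrow position into ELM input weights and biases."""
    W = individual[: n_hidden * n_features].reshape(n_hidden, n_features)
    b = individual[n_hidden * n_features:]
    return W, b

def elm_fitness(individual, X, T, n_features, n_hidden):
    """Fitness = training RMSE of the ELM defined by this individual."""
    W, b = decode(individual, n_features, n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    beta = np.linalg.pinv(H) @ T             # output weights stay analytic
    err = H @ beta - T
    return float(np.sqrt(np.mean(err ** 2)))

# toy usage with the 10-step sliding-window setup from Section 5
rng = np.random.default_rng(6)
X, T = rng.random((120, 10)), rng.random((120, 1))
ind = rng.uniform(-1, 1, 20 * 10 + 20)       # 20 hidden nodes (assumption)
print(elm_fitness(ind, X, T, n_features=10, n_hidden=20))
```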

5. Experimental Results and Analysis

5.1. Algorithm Tests

In order to validate the optimization ability and feasibility of the improved algorithm, ISSA is tested against the PSO, GWO, WOA, salp swarm (SSA1), and SSA algorithms on the same 16 benchmark functions.

5.1.1. Benchmark Functions

The benchmark functions are shown in Table 2. Among them, f1–f6 are high-dimensional unimodal functions, f7–f11 are high-dimensional multimodal functions, and f12–f16 are low-dimensional multimodal functions. The high-dimensional unimodal functions have a single global optimum and no local extrema, and mainly test convergence speed. The multimodal functions have multiple local extrema and are used, from both high- and low-dimensional perspectives, to observe each algorithm's ability to jump out of local extrema in different dimensions.

5.1.2. Convergence Accuracy and Stability Analysis

Simulation and comparison experiments on the six algorithms were conducted. To avoid excessive chance error, each benchmark function was run 40 times independently, and the best value, mean, and standard deviation were taken as evaluation indexes. The population size was set to 30 and the maximum number of iterations to 500; the final results are shown in Table 3, with the best results in bold.

First, among the six high-dimensional unimodal functions, ISSA finds the theoretical optimal value 0 when solving f1, f2, and f3, and its standard deviation on f1 and f3 is also 0; among the other algorithms, only SSA obtains the optimal value 0 on f1. Second, when solving f4 and f5, the best value, mean, and standard deviation obtained by ISSA are better than those of the other five algorithms by at least two orders of magnitude. On f6 the degree of improvement is smaller, but the best value and stability are still higher than those of the other algorithms. Among the five high-dimensional multimodal functions, on f7 the optimization ability of the six algorithms is basically the same, and ISSA shows no particular superiority. On f8 and f10, WOA and SSA already reach the optimal value, and ISSA maintains the performance of the original algorithm. ISSA likewise shows no advantage on f9: like SSA, it cannot jump out of a certain extreme point once it reaches it, but its standard deviation is 0, indicating strong stability. Among the high-dimensional multimodal functions, only f11 highlights the improved optimization performance of ISSA. For the five low-dimensional multimodal functions, all six algorithms reach the optimal value on f12–f15 but not on f16, which shows that the algorithms optimize better in low dimensions. On f12, SSA and ISSA are less stable than the other four algorithms, but ISSA is more stable than SSA. The stability of ISSA on f13 is second only to WOA, and ISSA is the most stable on f14 and f15. On f16, only GWO, SSA, and ISSA find the optimal solution, with stability increasing in that order.

The analysis shows that, for each algorithm, the solution accuracy on unimodal functions is higher than on multimodal functions, and the accuracy on low-dimensional functions is higher than on high-dimensional functions. However, regardless of whether the functions are unimodal, multimodal, high-dimensional, or low-dimensional, ISSA not only improves the optimization accuracy but also shows better stability than the other five algorithms.

5.1.3. Wilcoxon Rank-Sum Test Analysis

Derrac et al. [30] proposed that statistical tests should be performed when evaluating improved algorithms. In other words, comparing algorithms based only on mean and standard deviation values is not enough; statistical tests are needed to demonstrate that a proposed improved algorithm has a significant advantage over existing algorithms, and comparing results from independent runs also demonstrates the stability and fairness of the comparison. In this paper, the Wilcoxon rank-sum test is used at the 5% significance level to determine whether each result of ISSA is statistically significantly different from the best results of the other five algorithms. When the p-value is less than 5%, the null hypothesis is rejected, indicating that the compared algorithms are significantly different; otherwise the hypothesis is accepted, indicating that the optimization abilities of the compared algorithms are equivalent overall. Table 4 gives the p-values of the rank-sum test between ISSA and the other five algorithms on the 16 benchmark functions. When both compared algorithms reach the best value, no comparison can be made; NaN in the table means "not applicable", i.e., no significance judgment is possible. R is the result of the significance judgment: "+", "−", and "=" indicate that ISSA performs better than, worse than, and equivalently to the compared algorithm, respectively.

From Table 4, it can be seen that most of the p-values are far below 5%, which indicates that the superiority of ISSA over the other five algorithms is statistically significant. In the comparison between ISSA and SSA, the R-value on the f13–f16 functions is "−" because of the good inherent optimization performance of SSA: both ISSA and SSA can find the best value and differ only in the mean. ISSA still brings some improvement on the low-dimensional multimodal functions, but there is not much room left for improvement.
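
As an aside on reproducing this analysis, the rank-sum test itself is a one-liner with SciPy; the two arrays below are synthetic stand-ins for 40 independent runs of two algorithms on one benchmark function, checked at the paper's 5% level.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(5)
issa_runs = rng.normal(1e-6, 1e-7, 40)   # placeholder ISSA results
ssa_runs = rng.normal(5e-6, 1e-6, 40)    # placeholder comparison results

stat, p = ranksums(issa_runs, ssa_runs)
print(f"p = {p:.3e}:", "significant (+/-)" if p < 0.05 else "equivalent (=)")
```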

5.1.4. Model Ablation Experiment

The various added mechanisms of ISSA lead to its better optimization performance, but it is not obvious which mechanisms contribute and whether any particular mechanism contributes nothing. To show the role of each mechanism and make the experiments more convincing, we conducted ablation experiments on the ISSA model, comparing SSA, adaptive SSA (ASSA), mutation SSA (MSSA), and ISSA on the 16 benchmark functions. The final results are shown in Table 5.

As can be seen from Table 5, for the high-dimensional unimodal functions f1–f6, ASSA and MSSA improve the optimization performance on every function except f6. On f6, although ASSA and MSSA show no improvement over SSA, their combination in ISSA gives the highest stability. For the high-dimensional multimodal functions f7–f11, the test results on f7 are similar to f6: stability improves, but optimization performance does not. On f8–f10, all algorithms find the optimal value. On f11, ASSA and MSSA both outperform SSA, and ISSA performs best. For the low-dimensional multimodal functions f12–f16, every algorithm approaches the optimal value closely, so the improvement of ISSA is small, and on f13 the performance of ASSA, MSSA, and ISSA is not as good as SSA.

5.1.5. Time Complexity Analysis of ISSA

Assume that the population size of the algorithm is $N$, the dimension is $D$, and the maximum number of iterations is $iter_{\max}$; the time to randomly initialize the population parameters is $s_1$; the time to evaluate one individual's fitness is $j(D)$; the number of explorers is $pNum$ with a per-dimension update time of $s_2$; the number of followers is $sNum$ with a per-dimension update time of $s_3$; and the per-dimension update time of the warners is $s_4$. The time complexity of the initialization phase is $T_1 = O(s_1 + N(j(D) + D s_1))$; of the explorer update, $T_2 = O(pNum \cdot s_2 D)$; of the follower update, $T_3 = O(sNum \cdot s_3 D)$; and of the warner update, $T_4 = O((N - pNum - sNum) \cdot s_4 D)$. In summary, the total time complexity of SSA is $T = T_1 + (T_2 + T_3 + T_4) \cdot iter_{\max} = O(D + j(D))$.

In ISSA, let the time required for cat-map chaos be $u_1$ and the sorting and selection time be $u_2$; the time complexity of the initialization phase is then $T_{11} = O(s_1 + N(u_1 + j(D) + D s_1) + u_2)$. Let the time to evaluate the explorer-follower number update formula be $u_3$; the explorer update time complexity is $T_{22} = O(pNum \cdot s_2 D + iter_{\max} \cdot u_3)$, the follower update time complexity is $T_{33} = O(sNum \cdot s_3 D + iter_{\max} \cdot u_3)$, and the warner update time complexity is $T_{44} = O((N - pNum - sNum) \cdot s_4 D)$. In the Cauchy mutation and tent chaos perturbation stage, let the time to compute $f_{avg}$ be $u_4$; the times to evaluate the perturbation formula and the Cauchy mutation formula be $u_5$ and $u_6$, respectively; and the times to compare a sparrow's fitness with the average fitness and to update the target position by merit be $u_7$ and $u_8$, respectively. The time complexity of this stage is $T_{55} = O(u_4 + u_5 + u_6 + N(j(D) + u_7) + u_8)$. In summary, the time complexity of ISSA is $T' = T_{11} + (T_{22} + T_{33} + T_{44} + T_{55}) \cdot iter_{\max} = O(D + j(D))$. Since $T' = T$, the time complexity of ISSA does not increase.

In summary, the ablation experiments demonstrate that each of the proposed strategies contributes to the final performance of the ISSA algorithm, and the time complexity analysis shows that the complexity of ISSA is equivalent to that of SSA. The experiments on the three classes of functions show that, although the improvement of ISSA over SSA on low-dimensional multimodal functions is not obvious, its search performance on high-dimensional unimodal and multimodal functions is at least two orders of magnitude better than that of the other five algorithms, its convergence speed is significantly higher, and its stability is enhanced. This fully indicates that ISSA generally outperforms the other five algorithms, demonstrating its superiority and feasibility.

5.2. Network Security Situation Prediction Experiment Based on ISSA-ELM

5.2.1. Experimental Environment and Data Preprocessing

The experimental data in this paper come from the network environment built by Jiang et al. [31], shown in Figure 5. First, real hacker attacks of various types are launched against the environment. Second, by counting the number and types of attacks and assessing the degree of damage to the host after each attack, a network security assessment system is established to obtain the current network security situation value. Every 30 min, statistics are collected and the network security situation value is evaluated and calculated. Finally, 150 situation values are selected to form the sample data, and normalization maps them into the range [0, 1] to eliminate the possibility of large errors, as shown in Figure 6.
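
The paper states only that the 150 situation values are normalized into [0, 1]; min-max normalization, sketched below, is the usual way to achieve this and is assumed here. The sample values are made up for demonstration.

```python
import numpy as np

def min_max_normalize(values):
    """Scale situation values into [0, 1] (min-max normalization)."""
    v = np.asarray(values, dtype=float)
    return (v - v.min()) / (v.max() - v.min())

situation = np.array([0.31, 0.48, 0.22, 0.75, 0.40])  # made-up sample values
print(min_max_normalize(situation))
```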

The parameters of the ELM neural network must be determined for the experiment. The number of input-layer neurons represents the dimensionality of the sample data; here it corresponds to the network security situation values over a period of time, fed to the ELM with the sliding-window method. Network security situation analysis shows that the current situation of the network is related to the previous three to ten time periods. So as not to lose these relationships, the sliding-window size is set to the maximum value of ten for the prediction experiment. The number of hidden-layer neurons also has a certain impact on the final experimental results; because it usually does not exceed the number of input-layer neurons, and following the reasoning used to select the sliding-window size, we likewise take ten as its maximum. The purpose of this paper is to predict the future network situation, so the number of output-layer neurons is 1, representing the network security situation value of the next time period.

Because the network security situation time series consists of 150 consecutive situation values and the sliding-window size is 10, a total of 150 − (10 + 1) + 1 = 140 samples are formed, of which 120 are training samples and 20 are test samples. The selection of the security situation dimensions is shown in Table 6.
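
The sample construction can be reproduced in a few lines; the placeholder series below stands in for the real 150 situation values.

```python
import numpy as np

def sliding_window_samples(series, window=10):
    """Turn a situation-value series into (window -> next value) samples."""
    X = np.array([series[i : i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

series = np.arange(150) / 150.0            # placeholder for the 150 values
X, y = sliding_window_samples(series, 10)
X_train, y_train = X[:120], y[:120]        # 120 training samples
X_test, y_test = X[120:], y[120:]          # 20 test samples
print(X.shape, y.shape)                    # (140, 10) (140,)
```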

5.2.2. Analysis of Experimental Results

In order to evaluate the network situation prediction results, the mean relative error (MRE), mean square error (MSE), mean absolute error (MAE), and coefficient of determination ($R^2$) are used as evaluation metrics. MRE reflects the credibility of the measurement; MSE evaluates the variability of the data (the smaller the value, the better the prediction accuracy); and MAE reflects the actual magnitude of the prediction errors. The evaluation metrics are expressed as follows:

$$MRE = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100\%$$

$$MSE = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$

$$MAE = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right| \times 100\%$$

$$R^2 = \frac{\left[ \sum_{i=1}^{N} (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}}) \right]^2}{\left[ \sum_{i=1}^{N} (y_i - \bar{y})^2 \right] \left[ \sum_{i=1}^{N} (\hat{y}_i - \bar{\hat{y}})^2 \right]}$$

where $y_i$ is the actual value of a sample, $\hat{y}_i$ is the predicted value, $N$ is the number of samples, $\bar{y}$ is the average of the actual values, and $\bar{\hat{y}}$ is the average of the predicted values.
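
The four metrics translate directly into NumPy as shown below; the two small arrays are made-up values for demonstration only.

```python
import numpy as np

def metrics(y, y_hat):
    """MRE, MSE, MAE and R^2 as defined above (MRE/MAE in percent)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    mre = np.mean(np.abs((y - y_hat) / y)) * 100
    mse = np.mean((y - y_hat) ** 2)
    mae = np.mean(np.abs(y - y_hat)) * 100
    num = (np.sum((y - y.mean()) * (y_hat - y_hat.mean()))) ** 2
    den = np.sum((y - y.mean()) ** 2) * np.sum((y_hat - y_hat.mean()) ** 2)
    return mre, mse, mae, num / den

y_true = np.array([0.42, 0.51, 0.38, 0.66])   # made-up actual values
y_pred = np.array([0.40, 0.53, 0.36, 0.70])   # made-up predictions
print(metrics(y_true, y_pred))
```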

The experimental results of ISSA-ELM are compared with those of the traditional ELM, GA-ELM, and SSA-ELM in Figure 7. The convergence curve of each algorithm is shown in Figure 8, and the comparison of evaluation metrics is shown in Table 7.

It can be concluded from Figure 7 that the prediction curve of ISSA-ELM closely matches the actual values and has a higher goodness-of-fit than the other prediction models; moreover, the prediction error of ISSA-ELM is the smallest. In Figure 8, SSA-ELM stops improving before 400 iterations, and its effectiveness is not as good as that of GA-ELM, whereas ISSA-ELM not only converges quickly but also attains a better fitness value. In summary, the comparative analysis of the experimental results shows that, for the same sliding-window size, the ISSA-ELM algorithm proposed in this paper achieves a higher goodness-of-fit on the network security situation data than the GA-ELM and SSA-ELM algorithms; specifically, its accuracy is 88.7%, 6.6%, and 24.4% higher than those of ELM, GA-ELM, and SSA-ELM, respectively.

6. Conclusions

To tackle the problem of accuracy in network security situation prediction, we introduce and improve the sparrow search algorithm on top of the extreme learning machine and propose the ISSA-ELM model. The ELM neural network trains quickly on samples while ISSA optimizes its initial weights; together, they can accurately predict the next network security situation. The improved ISSA overcomes the tendency to fall into local optima, has good global convergence performance and robustness, shows better optimization capability, and outperforms the original algorithm overall.

Experimental comparisons show that the ISSA-ELM model has clear advantages over GA-ELM and SSA-ELM in a real network environment: fast convergence and higher prediction accuracy. However, ISSA-ELM also has shortcomings. For example, there is considerable uncertainty in the hidden-layer node-selection process, and an overly large sliding window makes ISSA-ELM prone to overfitting. Future studies should focus on adaptively choosing the number of hidden-layer nodes to further improve the convergence speed and prediction accuracy.

Author Contributions

Conceptualization, H.S. and J.W.; methodology, J.W.; software, H.S. and C.C.; validation, J.W.; formal analysis, J.W. and Z.L.; investigation, H.S. and J.W.; resources, H.S. and J.W.; data curation, J.W.; writing—original draft preparation, H.S. and Z.L.; writing—review and editing, J.W. and Z.L.; visualization, J.L.; supervision, J.L.; project administration, J.L.; funding acquisition, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (61806219, 61876189).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shi, L.Y.; Liu, J.; Liu, W.H.; Zhu, H.Q.; Duan, P.F. Survey of Research on Network Security Situation Awareness. Comput. Eng. Appl. 2019, 55, 1–9.
2. Lai, J.B.; Wang, H.Q.; Zhu, L. Study of Network Security Situation Awareness Model Based on Simple Additive Weight and Grey Theory. Comput. Intell. Secur. 2006, 2, 545–1548.
3. Hu, W.; Li, J.H.; Chen, X.Z.; Jiang, X.H. Network security situation prediction based on improved adaptive grey Verhulst model. J. Shanghai Jiaotong Univ. 2010, 15, 408–413.
4. Hu, H.L. Research on Data Fusion Technology for Network Security Awareness Based on D-S Evidence Theory. Ph.D. Thesis, National University of Defense Technology, Changsha, China, 2015.
5. Sun, N.Q.; Yang, L. Intrusion Detection Based on Back-Propagation Neural Network and Feature Selection Mechanism. Future Gener. Inf. Technol. Lect. Notes Comput. Sci. 2009, 5899, 151–159.
6. Zhang, L.; Liu, X.J.; Ma, J.; Sun, W.C.; Wang, X.F. The Prediction Algorithm of Network Security Situation Based on Grey Correlation Entropy Kalman Filtering. In Proceedings of the 2014 IEEE 7th Joint International Information Technology and Artificial Intelligence Conference, Chongqing, China, 20–21 December 2014.
7. Wang, C.D.; Zhang, Y.K. Network Security Situation Evaluation Based on Modified D-S Evidence Theory. Wuhan Univ. J. Nat. Sci. 2014, 19, 409–416.
8. Ren, H.; Zhu, Y.J.; Wang, P.; Li, P.; Zhang, Y.Q.; Wang, X.Z.; Li, Y.Y.; Gong, F.Q. Classification and Application of Roof Stability of Bolt Supporting Coal Roadway Based on BP Neural Network. Adv. Civ. Eng. 2020, 2020, 8838640.
9. Zhu, S.L.; Salim, H. Prediction of dissolved oxygen in urban rivers at the Three Gorges Reservoir, China: Extreme learning machines (ELM) versus artificial neural network (ANN). Water Qual. Res. J. 2020, 55, 106–118.
10. Yang, L.N.; Sun, X.; Li, Z.L. An efficient framework for remote sensing parallel processing: Integrating the artificial bee colony algorithm and multiagent technology. Remote Sens. 2019, 11, 152.
11. Hu, P.; Pan, J.S.; Chu, S.C. Improved binary grey wolf optimizer and its application for feature selection. Knowl.-Based Syst. 2020, 195, 105746.
12. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks, Perth, Australia, 27 November–1 December 1995; pp. 1942–1948.
13. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
14. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67.
15. Mirjalili, S. SCA: A sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 2016, 96, 120–133.
16. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 2017, 114, 163–191.
17. Xue, J.K.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34.
18. Liu, J.S.; Yuan, M.M.; Zuo, F. Global search-oriented adaptive leader salp swarm algorithm. Control Decis. 2021, 36, 2152–2160.
19. Zhou, J.; Wang, L.; Chen, X.Q. Image segmentation of 2-D maximum entropy based on the improved whale optimization algorithm. Intell. Comput. Appl. 2020, 10, 71–75.
20. Zhou, J. Workshop used robot navigation path planning method based on chaotic wolf pack besieging algorithm. Mach. Des. Manuf. 2020, 251–255.
21. Zhang, J.; Wang, J.S. Improved whale optimization algorithm based on nonlinear adaptive weight and golden sine operator. IEEE Access 2020, 8, 77013–77048.
22. Zhang, B.; Zheng, Y.J.; Zhang, M.X.; Chen, S.Y. Fireworks Algorithm with Enhanced Fireworks Interaction. IEEE/ACM Trans. Comput. Biol. Bioinform. 2015, 14, 42–55.
23. Chengtian, O.; Yujia, L.; Donglin, Z. An adaptive chaotic sparrow search optimization algorithm. In Proceedings of the 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), Nanchang, China, 26–28 March 2021; pp. 76–82.
24. Liang, Q.; Chen, B.; Wu, H.; Han, M. A Novel Modified Sparrow Search Algorithm Based on Adaptive Weight and Improved Boundary Constraints. In Proceedings of the 2021 IEEE 6th International Conference on Computer and Communication Systems (ICCCS), Chengdu, China, 23–26 April 2021; pp. 104–109.
25. Krishnan, G.S.; Kamath, S. A novel GA-ELM model for patient-specific mortality prediction over large-scale lab event data. Appl. Soft Comput. 2019, 80, 525–533.
26. Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501.
27. Tang, Y.; Li, C.; Li, S.; Cao, B.; Chen, C. A Fusion Crossover Mutation Sparrow Search Algorithm. Math. Probl. Eng. 2021, 2021, 9952606.
28. Chen, R.; Wang, S.Y. An optimization method for an integrated energy system scheduling process based on NSGA-II improved by tent mapping chaotic algorithms. Processes 2020, 8, 426.
29. Guo, Z.Z.; Wang, P.; May, Y.F.; Wang, Q.; Gong, C.Q. Whale optimization algorithm based on adaptive weight and cauchy mutation. Microelectron. Comput. 2017, 34, 20–25.
30. Derrac, J.; Garcia, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
31. Jiang, Y.; Li, C.H.; Wei, X.H.; Li, Z.P. Research on Network Security Situation Prediction Based on RBF Optimized by Improved PSO. Meas. Control Technol. 2018, 37, 56–60.

Figure 1. Network structure of ELM.

Figure 2. Change curve of c.

Figure 3. Flow chart of ISSA.

Figure 4. Flow chart of ISSA-ELM prediction.

Figure 5. Network experiment environment diagram.

Figure 6. Network security situation value.

Figure 7. Comparison of experimental results. (a) Predicted value; (b) Error comparison.

Figure 8. The comparison of fitness curves.

Table 1. Influence of parameter m on SSA.

| m | Best | Mean | Std | Average Number of Convergences |
|---|---|---|---|---|
| 1.00 | 5.509 × 10−72 | 3.120 × 10−71 | – | 891 |
| 1.50 | 0 | 0 | 0 | 723 |
| 2.00 | 0 | 0 | 0 | 151 |
| 2.50 | 7.505 × 10−35 | 4.747 × 10−34 | – | 289 |
| 3.00 | 0 | 0 | 0 | 224 |
| 3.50 | 0 | 0 | 0 | 557 |
| 4.00 | 0 | 0 | 0 | 292 |

Table 2. Benchmark functions.

| Function | Formula | Dim | Domain | Best |
|---|---|---|---|---|
| Sphere | $f_1(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100, 100] | 0 |
| Schwefel's | $f_2(x) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | 30 | [−10, 10] | 0 |
| Quadric | $f_3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100, 100] | 0 |
| Rosenbrock | $f_4(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−30, 30] | 0 |
| Step | $f_5(x) = \sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 30 | [−100, 100] | 0 |
| Quartic | $f_6(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28, 1.28] | 0 |
| Schwefel | $f_7(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | 30 | [−500, 500] | −418.9829n |
| Rastrigin | $f_8(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12, 5.12] | 0 |
| Ackley | $f_9(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32, 32] | 0 |
| Griewank | $f_{10}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600, 600] | 0 |
| Generalized penalized | $f_{11}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\}$ | 30 | [−50, 50] | 0 |
| Foxholes | $f_{12}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65, 65] | 1 |
| Hartmann 6-D | $f_{13}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0, 1] | −3.32237 |
| Shekel | $f_{14}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0, 10] | −10.5363 |
| Six-Hump Camel | $f_{15}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5, 5] | −1.0316 |
| Kowalik | $f_{16}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5, 5] | 0.000307 |

Table 3. Comparison of benchmark function results.

| Algorithm | f1 Best | f1 Mean | f1 Std | f2 Best | f2 Mean | f2 Std |
|---|---|---|---|---|---|---|
| PSO | 1.057 × 10−5 | 1.422 × 10−4 | 2.013 × 10−4 | 5.100 × 10−3 | 4.260 × 10−2 | 6.160 × 10−2 |
| GWO | 2.026 × 10−29 | 1.522 × 10−27 | 1.998 × 10−27 | 2.312 × 10−17 | 8.104 × 10−17 | 3.908 × 10−17 |
| WOA | 2.579 × 10−87 | 6.403 × 10−73 | 2.573 × 10−72 | 4.803 × 10−58 | 1.678 × 10−51 | 7.650 × 10−51 |
| SSA1 | 2.628 × 10−8 | 1.302 × 10−7 | 1.163 × 10−7 | 4.260 × 10−2 | 2.371 | 1.860 |
| SSA | 0 | 1.227 × 10−51 | 7.761 × 10−51 | 1.129 × 10−118 | 5.065 × 10−31 | 3.185 × 10−30 |
| ISSA | 0 | 1.010 × 10−182 | 0 | 0 | 4.831 × 10−35 | 2.656 × 10−34 |

| Algorithm | f3 Best | f3 Mean | f3 Std | f4 Best | f4 Mean | f4 Std |
|---|---|---|---|---|---|---|
| PSO | 25.69 | 73.63 | 36.42 | 15.63 | 99.15 | 57.19 |
| GWO | 2.630 × 10−9 | 7.351 × 10−6 | 1.307 × 10−6 | 26.12 | 26.93 | 6.457 × 10−1 |
| WOA | 3610 | 48220 | 17930 | 27.07 | 27.98 | 4.642 × 10−1 |
| SSA1 | 233 | 1467 | 767.7 | 24.31 | 65.92 | 59.3 |
| SSA | 2.774 × 10−139 | 4.129 × 10−29 | 1.628 × 10−28 | 6.711 × 10−9 | 3.419 × 10−5 | 1.144 × 10−4 |
| ISSA | 0 | 1.168 × 10−167 | 0 | 1.658 × 10−13 | 8.871 × 10−7 | 3.077 × 10−6 |

| Algorithm | f5 Best | f5 Mean | f5 Std | f6 Best | f6 Mean | f6 Std |
|---|---|---|---|---|---|---|
| PSO | 1.008 × 10−5 | 1.894 × 10−4 | 3.878 × 10−4 | 4.340 × 10−2 | 1.793 × 10−1 | 5.150 × 10−2 |
| GWO | 7.462 × 10−5 | 7.011 × 10−1 | 3.678 × 10−1 | 3.836 × 10−4 | 1.900 × 10−3 | 8.773 × 10−4 |
| WOA | 7.440 × 10−2 | 4.005 × 10−1 | 2.182 × 10−1 | 3.416 × 10−5 | 2.700 × 10−3 | 3.400 × 10−3 |
| SSA1 | 2.344 × 10−8 | 2.011 × 10−7 | 2.792 × 10−7 | 6.170 × 10−2 | 1.766 × 10−1 | 6.360 × 10−2 |
| SSA | 1.179 × 10−14 | 1.538 × 10−11 | 3.735 × 10−11 | 8.384 × 10−5 | 1.700 × 10−3 | 1.400 × 10−3 |
| ISSA | 2.101 × 10−23 | 7.744 × 10−15 | 1.533 × 10−14 | 2.468 × 10−6 | 8.001 × 10−4 | 7.674 × 10−4 |

| Algorithm | f7 Best | f7 Mean | f7 Std | f8 Best | f8 Mean | f8 Std |
|---|---|---|---|---|---|---|
| PSO | −7082 | −4601 | 1108 | 36.09 | 60.26 | 14.6 |
| GWO | −7586 | −5865 | 907.4 | 0 | 3.174 | 4.412 |
| WOA | −12570 | −10570 | 1769 | 0 | 0 | 0 |
| SSA1 | −9017 | −7584 | 660.7 | 23.88 | 53.13 | 20.77 |
| SSA | −9618 | −8525 | 541.5 | 0 | 0 | 0 |
| ISSA | −8839 | −6541 | 672.4 | 0 | 0 | 0 |

| Algorithm | f9 Best | f9 Mean | f9 Std | f10 Best | f10 Mean | f10 Std |
|---|---|---|---|---|---|---|
| PSO | 2.200 × 10−3 | 1.381 × 10−1 | 3.759 × 10−1 | 1.354 × 10−6 | 1.030 × 10−1 | 9.200 × 10−3 |
| GWO | 7.550 × 10−14 | 1.021 × 10−13 | 1.668 × 10−14 | 0 | 3.200 × 10−3 | 8.000 × 10−3 |
| WOA | 8.882 × 10−16 | 4.530 × 10−15 | 2.955 × 10−15 | 0 | 0 | 0 |
| SSA1 | 9.313 × 10−1 | 2.648 | 1.245 | 6.545 × 10−4 | 1.330 × 10−2 | 1.050 × 10−2 |
| SSA | 8.882 × 10−16 | 8.882 × 10−16 | 0 | 0 | 0 | 0 |
| ISSA | 8.882 × 10−16 | 8.882 × 10−16 | 0 | 0 | 0 | 0 |

| Algorithm | f11 Best | f11 Mean | f11 Std | f12 Best | f12 Mean | f12 Std |
|---|---|---|---|---|---|---|
| PSO | 9.907 × 10−8 | 1.040 × 10−2 | 3.930 × 10−2 | 9.980 × 10−1 | 3.635 | 2.619 |
| GWO | 1.280 × 10−2 | 4.300 × 10−2 | 1.470 × 10−2 | 9.980 × 10−1 | 3.792 | 3.810 |
| WOA | 4.500 × 10−3 | 2.920 × 10−2 | 4.220 × 10−2 | 9.980 × 10−1 | 2.838 | 3.215 |
| SSA1 | 2.115 | 6.832 | 3.754 | 9.980 × 10−1 | 1.097 | 3.762 × 10−1 |
| SSA | 2.785 × 10−16 | 9.587 × 10−13 | 3.536 × 10−12 | 9.980 × 10−1 | 4.847 | 5.242 |
| ISSA | 1.335 × 10−22 | 3.992 × 10−15 | 8.391 × 10−15 | 9.980 × 10−1 | 9.611 | 4.962 |

| Algorithm | f13 Best | f13 Mean | f13 Std | f14 Best | f14 Mean | f14 Std |
|---|---|---|---|---|---|---|
| PSO | −3.322 | −3.274 | 5.900 × 10−2 | −10.54 | −9.165 | 2.805 |
| GWO | −3.322 | −3.254 | 8.450 × 10−2 | −10.54 | −10.33 | 1.283 |
| WOA | −3.322 | −3.209 | 1.158 × 10−2 | −10.54 | −7.119 | 3.386 |
| SSA1 | −3.322 | −3.220 | 6.150 × 10−2 | −10.54 | −8.422 | 3.343 |
| SSA | −3.322 | −3.280 | 5.740 × 10−2 | −10.54 | −8.508 | 2.652 |
| ISSA | −3.322 | −3.216 | 3.114 × 10−2 | −10.54 | −10.06 | 6.803 × 10−15 |

| Algorithm | f15 Best | f15 Mean | f15 Std | f16 Best | f16 Mean | f16 Std |
|---|---|---|---|---|---|---|
| PSO | −1.032 | −1.032 | 2.043 × 10−16 | 3.275 × 10−4 | 8.612 × 10−4 | 1.552 × 10−4 |
| GWO | −1.032 | −1.032 | 2.281 × 10−8 | 3.075 × 10−4 | 3.900 × 10−3 | 7.700 × 10−3 |
| WOA | −1.032 | −1.032 | 5.898 × 10−10 | 3.229 × 10−4 | 7.261 × 10−4 | 4.383 × 10−4 |
| SSA1 | −1.032 | −1.032 | 3.233 × 10−14 | 4.024 × 10−4 | 1.400 × 10−3 | 3.100 × 10−3 |
| SSA | −1.032 | −1.032 | 1.067 × 10−16 | 3.075 × 10−4 | 3.219 × 10−4 | 5.463 × 10−5 |
| ISSA | −1.032 | −1.032 | 2.073 × 10−18 | 3.075 × 10−4 | 3.075 × 10−4 | 8.314 × 10−10 |

Table 4. p-values for Wilcoxon's rank-sum test.

| Function | PSO p | R | GWO p | R | WOA p | R | SSA1 p | R | SSA p | R |
|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 8.2567 × 10−15 | + | 8.2567 × 10−15 | + | 8.6499 × 10−11 | + | 9.5500 × 10−15 | + | 5.5707 × 10−10 | + |
| f2 | 1.1765 × 10−14 | + | 1.2616 × 10−14 | + | 1.8551 × 10−5 | + | 1.1765 × 10−14 | + | 1.9884 × 10−7 | + |
| f3 | 1.2970 × 10−14 | + | 9.5500 × 10−15 | + | 9.5500 × 10−15 | + | 9.5500 × 10−15 | + | 2.6945 × 10−7 | + |
| f4 | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 1.9493 × 10−10 | + |
| f5 | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 8.4081 × 10−13 | + |
| f6 | 1.4351 × 10−14 | + | 6.5400 × 10−2 | − | 6.9401 × 10−4 | + | 1.4351 × 10−14 | + | 9.5780 × 10−1 | − |
| f7 | 3.4621 × 10−11 | + | 2.4000 × 10−3 | + | 2.8321 × 10−10 | + | 9.9259 × 10−7 | + | 2.8951 × 10−13 | + |
| f8 | 1.9667 × 10−16 | + | 1.9035 × 10−16 | + | NaN | = | 1.9667 × 10−16 | + | NaN | = |
| f9 | 1.9667 × 10−16 | + | 1.7706 × 10−16 | + | 1.9034 × 10−11 | + | 1.9667 × 10−16 | + | NaN | = |
| f10 | 1.9667 × 10−16 | + | 4.1870 × 10−4 | + | NaN | = | 1.9667 × 10−16 | + | NaN | = |
| f11 | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 1.4351 × 10−14 | + | 5.1215 × 10−11 | + |
| f12 | 1.1761 × 10−5 | + | 1.1234 × 10−1 | − | 5.1000 × 10−3 | + | 2.9017 × 10−8 | + | 7.0000 × 10−3 | + |
| f13 | 3.7700 × 10−2 | + | 2.2793 × 10−8 | + | 9.8876 × 10−12 | + | 6.3229 × 10−13 | + | 3.4070 × 10−1 | − |
| f14 | 1.6200 × 10−2 | + | 7.1850 × 10−5 | + | 2.6114 × 10−6 | + | 9.0250 × 10−7 | + | 5.6820 × 10−1 | − |
| f15 | 9.6055 × 10−8 | + | 2.9329 × 10−15 | + | 2.9329 × 10−15 | + | 6.8851 × 10−15 | + | 6.3210 × 10−1 | − |
| f16 | 1.4351 × 10−14 | + | 8.6608 × 10−12 | + | 1.0585 × 10−13 | + | 1.7977 × 10−14 | + | 6.6850 × 10−1 | − |

Table 5. Experimental results of model ablation.

| Algorithm | f1 Mean | f1 Std | f2 Mean | f2 Std | f3 Mean | f3 Std |
|---|---|---|---|---|---|---|
| SSA | 2.526 × 10−60 | 1.050 × 10−59 | 1.212 × 10−26 | 7.105 × 10−26 | 7.076 × 10−26 | 3.363 × 10−25 |
| ASSA | 1.690 × 10−75 | 1.069 × 10−74 | 9.746 × 10−29 | 6.164 × 10−28 | 5.576 × 10−29 | 1.763 × 10−28 |
| MSSA | 1.679 × 10−67 | 1.032 × 10−66 | 4.333 × 10−37 | 1.978 × 10−36 | 5.084 × 10−29 | 1.605 × 10−28 |
| ISSA | 1.034 × 10−181 | 0 | 1.966 × 10−37 | 1.243 × 10−36 | 1.026 × 10−155 | 3.243 × 10−155 |

| Algorithm | f4 Mean | f4 Std | f5 Mean | f5 Std | f6 Mean | f6 Std |
|---|---|---|---|---|---|---|
| SSA | 9.633 × 10−5 | 2.729 × 10−4 | 5.003 × 10−11 | 1.924 × 10−10 | 1.600 × 10−3 | 1.300 × 10−3 |
| ASSA | 3.436 × 10−6 | 8.914 × 10−6 | 2.327 × 10−14 | 3.221 × 10−14 | 1.600 × 10−3 | 1.800 × 10−3 |
| MSSA | 3.527 × 10−6 | 7.135 × 10−6 | 2.300 × 10−14 | 3.999 × 10−14 | 2.000 × 10−3 | 1.800 × 10−3 |
| ISSA | 4.318 × 10−7 | 8.616 × 10−7 | 2.029 × 10−14 | 3.859 × 10−14 | 1.400 × 10−3 | 8.627 × 10−4 |

| Algorithm | f7 Mean | f7 Std | f8 Mean | f8 Std | f9 Mean | f9 Std |
|---|---|---|---|---|---|---|
| SSA | −8463 | 531.4 | 0 | 0 | 8.882 × 10−16 | 0 |
| ASSA | −8492 | 592.1 | 0 | 0 | 8.882 × 10−16 | 0 |
| MSSA | −8726 | 907.5 | 0 | 0 | 8.882 × 10−16 | 0 |
| ISSA | −8759 | 405.2 | 0 | 0 | 8.882 × 10−16 | 0 |

| Algorithm | f10 Mean | f10 Std | f11 Mean | f11 Std | f12 Mean | f12 Std |
|---|---|---|---|---|---|---|
| SSA | 0 | 0 | 3.474 × 10−12 | 1.105 × 10−11 | 6.26 | 5.454 |
| ASSA | 0 | 0 | 6.636 × 10−15 | 1.377 × 10−14 | 7.62 | 5.746 |
| MSSA | 0 | 0 | 1.356 × 10−15 | 2.427 × 10−15 | 9.980 × 10−1 | 1.480 × 10−16 |
| ISSA | 0 | 0 | 1.045 × 10−15 | 8.414 × 10−16 | 9.980 × 10−1 | 1.655 × 10−16 |

| Algorithm | f13 Mean | f13 Std | f14 Mean | f14 Std | f15 Mean | f15 Std |
|---|---|---|---|---|---|---|
| SSA | −3.263 | 6.270 × 10−2 | −8.373 | 2.793 | −1.032 | 1.958 × 10−16 |
| ASSA | −3.274 | 6.140 × 10−2 | −8.914 | 2.612 | −1.032 | 1.958 × 10−16 |
| MSSA | −3.298 | 5.010 × 10−2 | −8.914 | 2.612 | −1.032 | 2.094 × 10−16 |
| ISSA | −3.274 | 6.140 × 10−2 | −9.455 | 2.280 | −1.032 | 1.958 × 10−16 |

| Algorithm | f16 Mean | f16 Std |
|---|---|---|
| SSA | 3.456 × 10−4 | 8.718 × 10−5 |
| ASSA | 3.404 × 10−4 | 1.040 × 10−4 |
| MSSA | 3.215 × 10−4 | 6.452 × 10−5 |
| ISSA | 3.075 × 10−4 | 8.442 × 10−10 |

Table 6. Selection of security situation dimension.

| Input Samples | Output Samples |
|---|---|
| x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 | x11 |
| x2, x3, x4, x5, x6, x7, x8, x9, x10, x11 | x12 |
| … | … |
| x110, x111, x112, x113, x114, x115, x116, x117, x118, x119 | x120 |

Table 7. Comparison of evaluation indexes.

| Algorithm | MRE | MSE | MAE | R² |
|---|---|---|---|---|
| ELM | 0.32291 | 0.0052134 | 0.058761 | 0.11032 |
| GA-ELM | 0.13248 | 0.00087758 | 0.025771 | 0.90944 |
| SSA-ELM | 0.19723 | 0.0019450 | 0.0387 | 0.73668 |
| ISSA-ELM | 0.055477 | 0.00015388 | 0.010065 | 0.97399 |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Introduction: My name is Mr. See Jast, I am a open, jolly, gorgeous, courageous, inexpensive, friendly, homely person who loves writing and wants to share my knowledge and understanding with you.