RECOGNITION OF MINE WATER-INRUSH SOURCES USING ARTIFICIAL NEURAL NETWORK

Abstract: In the process of excavation and mining, water inrush induced by geological or human factors is a complex geological hazard that often leads to disastrous consequences. Because so many factors, and interactions between factors, are involved, an accurate prediction before an inrush accident is difficult; no matter how accurate a risk assessment approach is, it cannot guarantee that every water-inrush accident will be predicted. To date, inrush accidents still occur every year all over the world, especially in developing countries. For inrush accidents in underground mining, the first and most critical step in controlling the accident is to find the related inrush sources: accurately identifying which aquifer or water body is directly connected to the inrush is essential for controlling the water volume and reducing casualties and economic losses. In this study, a method of using an artificial neural network (ANN) to identify water-inrush sources is proposed. A back propagation neural network (BPNN) is established to train, test and predict on sample data selected from the Jiaozuo mine area; the results show that the ANN is an effective approach for identifying water sources.


INTRODUCTION
Complex hydrogeological conditions in mining operations often lead to sudden water-inrush accidents, causing heavy losses of human life and property. According to statistics (Guomei An 2011), since the beginning of the 1980s nearly 250 mines in China have been flooded and almost 9000 people have lost their lives. From 2006 to 2010 alone, 306 water accidents occurred in China's major coal mines, resulting in 1325 deaths.
Water inrush in underground mining is a complex geological hazard related to many geological factors and their interactions. To date, many investigations (Shi and Singh 2001; Motyka and Bosch 1985; Lu et al. 2015; Mokhov 2007) have been conducted all over the world to research the specific mechanisms of water hazards, but the precise mechanisms of instantaneous inrush remain unresolved. There have also been many studies (Wu et al. 2011a; Wu et al. 2011b; Dumpleton et al. 2001; Dong et al. 2012) on using different techniques to predict the occurrence of water inrush, but the prediction approaches are still unreliable. Currently, inrush accidents still occur every year all over the world, especially in developing countries.
While the mechanisms remain unresolved and the prediction techniques remain unreliable, the most effective and critical way of controlling an inrush accident in underground mining is to find the related inrush sources: accurately identifying which aquifer or water body is directly connected to the inrush helps to estimate the inrush scale and then to take further pumping measures to manage the accident. In this study, an approach using an artificial neural network (ANN) to identify water-inrush sources is proposed and implemented.

METHODOLOGY OF ANN
2.1. BACK PROPAGATION NEURAL NETWORK
Among all neural networks, the BP neural network (BPNN), independently discovered by Rumelhart, Hinton and Williams (1986) and Parker (1985), has been widely used since the detailed description by Rumelhart and McClelland (1998). It is a forward neural network with multiple layers of neurons; each layer contains a number of neurons, and neurons in adjacent layers are interconnected by weights and thresholds. It is a supervised learning algorithm, and the essence of the algorithm is to constantly compare the network outputs with the target vectors and adjust the network weights and thresholds so as to minimize the mean square error (MSE).
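As a concrete illustration of the quantity being minimized, the sketch below (not from the paper; the numeric values are invented) compares a set of network outputs with their target vectors and computes the MSE that training drives down:

```python
# Illustrative sketch of the objective a BPNN minimizes: the mean square
# error (MSE) between network outputs and target vectors. The numeric
# values here are invented for demonstration only.
import numpy as np

outputs = np.array([[0.9, 0.1],
                    [0.2, 0.8]])        # what the network currently emits
targets = np.array([[1.0, 0.0],
                    [0.0, 1.0]])        # what it should emit

mse = np.mean((outputs - targets) ** 2)
print(mse)   # 0.025 -- training adjusts weights and thresholds to shrink this
```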

DETAILED ALGORITHM OF BPNN
To ensure that the BPNN has a strong capability of learning and predicting, an appropriate neural network must be established based on the actual project. Once the structure of the BPNN is determined in view of the practical project, a mapping relation between input and output can be created, provided that enough training sample sets are used to train the network. With this mapping relation, the network can then be used to predict new samples. This is how the network works, from training (or learning) to prediction. The algorithm of the BPNN is described in detail in the following steps (also shown in Fig. 1).
Step one: according to the input vector p = (p1, p2, …, pn) and the target vector t = (t1, t2, …, tq), the node numbers of the input layer, the hidden layer and the output layer can be determined; suppose the node numbers for these three layers are n, m and q, respectively. The connection weights and the thresholds (wij, wjk, aj and bk) are initialized as tiny values, generally obtained from the computer's random number generator.
Step two: according to the input vector p, the output of the hidden layer yj is calculated by Eq. (1) as:

yj = f(Σi wij pi − aj), j = 1, 2, …, m (1)

In this formula, wij denotes the connection weights from the input layer to the hidden layer, aj is the threshold of the hidden layer, and f represents the activation function of the hidden layer, whose mathematical expression is f(x) = 1/(1 + e^(−x)).
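Eq. (1) computes each hidden output as a weighted sum of the inputs minus a threshold, passed through the sigmoid f. A minimal sketch, with invented weights and thresholds rather than values from the paper:

```python
# Sketch of the hidden-layer computation of Eq. (1):
#   y_j = f(sum_i w_ij * p_i - a_j),  f(x) = 1 / (1 + e^(-x))
# All numeric values are illustrative, not taken from the paper.
import numpy as np

p = np.array([0.5, 1.0])                  # input vector, n = 2
w_ij = np.array([[0.2, -0.4, 0.1],
                 [0.3,  0.5, -0.2]])      # n x m weight matrix, m = 3
a = np.array([0.1, 0.0, -0.1])            # hidden-layer thresholds a_j

y = 1.0 / (1.0 + np.exp(-(p @ w_ij - a)))  # one hidden output per node
print(y.shape)   # (3,) -- the sigmoid keeps every y_j in (0, 1)
```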
Step three: according to yj, the outputs of the output layer are calculated by Eq. (2) as:

ok = g(Σj wjk yj − bk), k = 1, 2, …, q (2)

In Eq. (2), wjk denotes the weights from the hidden layer to the output layer, bk is the threshold of the output layer, and g is the activation function of the output layer, whose mathematical expression is g(x) = x.
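Because g is the identity, Eq. (2) reduces to a plain weighted sum minus a threshold. A sketch continuing with invented numbers:

```python
# Sketch of the output-layer computation of Eq. (2):
#   o_k = g(sum_j w_jk * y_j - b_k),  with g(x) = x (identity)
# Numeric values are illustrative only.
import numpy as np

y = np.array([0.6, 0.4, 0.9])             # hidden outputs, m = 3
w_jk = np.array([[0.5],
                 [-0.3],
                 [0.2]])                  # m x q weight matrix, q = 1
b = np.array([0.1])                       # output-layer threshold b_k

o = y @ w_jk - b                          # g is the identity, so no squashing
print(o)   # [0.26]
```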
Step four: supposing that E = (1/2) Σk (tk − ok)^2 is defined as the error function of the network, the increments of the weights and the thresholds can be calculated by Eq. (3) as:

Δwjk = η δk yj, Δbk = −η δk, with δk = tk − ok
Δwij = η δj pi, Δaj = −η δj, with δj = yj(1 − yj) Σk δk wjk (3)

where η denotes the rate of network learning, which is set to a small value in the interval [0, 1].
Step five: the weights and the thresholds are updated by adding the increments obtained in step four.
Step six: through Eq. (8), the global error, i.e., the mean square error over all training samples, is calculated to check whether it reaches the accuracy requirement. If it does not reach the minimum MSE, the algorithm returns to step two; otherwise, the iteration ends.

Different aquifers usually have distinct hydrochemical characteristics, because they are either supplied by different water sources or drained under completely different conditions. For example, aquifers below the floor always have high concentrations of Ca2+, Na+, K+, Mg2+ and HCO3−, while water in mined-out areas always has a high content of SO4^2− because mined-out areas remain in a closed state for long periods. Therefore, it is reasonable to identify which aquifer is related to an inrush accident by analyzing the hydrochemical characteristics of water samples from all the related aquifers.
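Steps one to six can be sketched end-to-end as follows. This is not the paper's MATLAB code: the layer sizes, the learning rate and the toy data (learning logical AND) are invented purely to show the training loop.

```python
# NumPy sketch of BPNN training steps one to six. The toy problem
# (logical AND), layer sizes and learning rate are illustrative
# assumptions, not the paper's actual configuration.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 2, 4, 1                         # step one: node numbers
w_ij = rng.uniform(-0.5, 0.5, (n, m))     # input -> hidden weights
w_jk = rng.uniform(-0.5, 0.5, (m, q))     # hidden -> output weights
a, b = np.zeros(m), np.zeros(q)           # thresholds, small initial values
eta = 0.5                                 # learning rate in [0, 1]

f = lambda x: 1.0 / (1.0 + np.exp(-x))    # hidden activation (sigmoid)

P = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # toy inputs
T = np.array([[0], [0], [0], [1]], float)               # toy targets (AND)

for epoch in range(5000):
    E = 0.0
    for p, t in zip(P, T):
        y = f(p @ w_ij - a)               # step two: hidden output, Eq. (1)
        o = y @ w_jk - b                  # step three: linear output, Eq. (2)
        d_k = t - o                       # step four: output-layer error
        d_j = y * (1 - y) * (w_jk @ d_k)  #            hidden-layer error
        w_jk += eta * np.outer(y, d_k)    # step five: apply the increments
        b    -= eta * d_k
        w_ij += eta * np.outer(p, d_j)
        a    -= eta * d_j
        E += 0.5 * np.sum(d_k ** 2)       # accumulate squared error
    if E / len(P) < 1e-6:                 # step six: check the global error
        break
```

After training, the fitted network reproduces the AND table when its outputs are rounded; the stopping threshold plays the role of the minimum-MSE goal in step six.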
In this paper, the Jiaozuo mine area, located in Henan province, China, is selected as a case study to show how the ANN is used for water-source recognition. In the Jiaozuo mine area, 9 major producing mines are frequently threatened by water-inrush accidents: more than 700 water inrushes have occurred over the area's coal production history, among which 61 had an inrush-water volume greater than 10 m3/min, with a maximum inrush volume of up to 320 m3/min. The Quaternary sandstone aquifer, the Permian shale aquifer, the Ordovician limestone aquifer and the Carboniferous limestone aquifer are the four aquifers related to most of the water-inrush incidents. In 2003, based on 39 water samples collected from the four major aquifers in this mine area, a complex mathematical model was established by Zhang (2003) to recognize water-inrush sources; however, that model was complicated, so in this paper we establish a BPNN (Fig. 2) to replace it and make source recognition easier.
The BPNN model used in this study consists of three layers, with 6, 20 and 4 neurons respectively. The 6 neurons of the input layer denote the concentrations of Na+ + K+, Ca2+, Mg2+, Cl−, SO4^2− and HCO3−; 20 is the number of neurons of the hidden layer; and the 4 neurons of the output layer represent the 4 elements of the output vector, which is expressed as X = (X1, X2, X3, X4).
From all of the 39 water samples, 35 sample sets (Table 1) are selected as training data and the remaining 4 are kept as forecasting objects. The 35 samples, which belong to 4 aquifers, are divided into 4 different vectors of the form T = (T1, T2, T3, T4), which are also the target vectors of the neural network. For example, vector (1, 0, 0, 0) represents the 6 water samples belonging to the Ordovician limestone aquifer (Aquifer Ⅰ); vector (0, 1, 0, 0) indicates the 12 samples of the Carboniferous limestone aquifer (Aquifer Ⅱ); vector (0, 0, 1, 0) denotes the 9 water samples belonging to the Permian shale aquifer in the roof (Aquifer Ⅲ); and vector (0, 0, 0, 1) represents the 8 water samples of the Quaternary sandstone aquifer (Aquifer Ⅳ). After establishing the structure of the network and determining the training samples, MATLAB code is written, based on the algorithm described in Section 3.2, to perform BPNN training, testing and forecasting. The goal for the minimum MSE of the network is set to 1 × 10−7 and the program is debugged to train the samples. The training process is shown in Fig. 3: the network converges to a minimum MSE of 2.3 × 10−8 after 17 training cycles, that is, the mapping relation between input and output is established after 17 steps of iteration. After the training process is finished, the next step is to test the network. Here the samples in Table 1 are used once again to test the network: 2 samples of each aquifer are randomly selected for network testing, and the testing results are shown in Table 2.
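The target-vector encoding described above can be sketched as follows. The per-aquifer sample counts (6, 12, 9 and 8) follow the text; the actual concentration values of Table 1 are not reproduced, only the label encoding is shown.

```python
# Sketch of the one-hot target vectors T = (T1, T2, T3, T4) for the four
# aquifers, using the per-aquifer training-sample counts given in the
# text. Table 1's concentration data are not reproduced here.
import numpy as np

sample_counts = {"Aquifer I": 6,     # Ordovician limestone
                 "Aquifer II": 12,   # Carboniferous limestone
                 "Aquifer III": 9,   # Permian shale (roof)
                 "Aquifer IV": 8}    # Quaternary sandstone

aquifers = list(sample_counts)
T = np.array([np.eye(4)[aquifers.index(name)]
              for name, count in sample_counts.items()
              for _ in range(count)])
print(T.shape)   # (35, 4): one one-hot target row per training sample
```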
Comparing every element of the output vectors with the elements of the 4 target vectors, each output vector can be classified into the corresponding aquifer. The final results indicate that the recognition rate is 100%: all 8 selected samples are correctly identified, which indicates that the trained network has reached the prediction requirements. Among the 39 water samples in Table 1, 35 are used for training and testing the neural network, so the 4 remaining samples (see Table 3) can be used to test the prediction performance of the network. The first step of the prediction process is to use the 4 remaining samples as new input data to calculate the corresponding output vectors; then, by comparing each element of the output vectors with the target vectors, the aquifer to which each of the four samples belongs can be identified. The final output vectors and prediction results are shown in Table 4. As can be seen from the table, the prediction results are fully consistent with the actual situations, which indicates that the ANN is a reliable approach for identifying water sources among many different related aquifers, and it can be recommended for identifying water sources in water-inrush accidents.
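The comparison rule described above amounts to assigning each output vector to the target vector it most resembles, i.e., the position of its largest element. A minimal sketch; the example output values are illustrative, not rows of Table 2 or Table 4:

```python
# Sketch of the recognition rule: an output vector is classified into
# the aquifer whose one-hot target vector it is closest to, which here
# is simply the index of the largest element.
import numpy as np

aquifer_names = ["Aquifer I", "Aquifer II", "Aquifer III", "Aquifer IV"]

def classify(output_vector):
    """Map a 4-element network output to the best-matching aquifer."""
    return aquifer_names[int(np.argmax(output_vector))]

print(classify([1.00, 0.00, 0.02, 0.01]))   # Aquifer I
print(classify([0.01, 0.02, 0.03, 0.97]))   # Aquifer IV
```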

CONCLUSION
Addressing the problem of how to identify the water source quickly and accurately when a water inrush occurs during mining, in order to avoid further casualties and to control the disaster, this study proposed and developed an approach using the ANN to achieve water-source identification. The final results of our case study in the Jiaozuo mine area indicate that the ANN is an ideal tool for water-source recognition: the model is simple and the recognition result is accurate. Compared with the method of hydrochemical analysis, the ANN method is more objective, because the training and forecasting processes are based on objective samples rather than an engineer's subjective judgement. It can be recommended for use in more mining practices to identify water sources when a water-inrush incident occurs.

Fig. 1. The structure of the BP neural network and its algorithm in detail

Fig. 2. BP neural network model of water-source recognition

Table 1. The training sample sets and their vector representation

Table 2. The testing results with samples randomly selected from the training samples

Table 3. The remaining sample sets and their vector representations