1 Introduction
An artificial neural network is a network constructed artificially on the basis of human understanding of the brain's neural network. It is a mathematical model of the human brain's neural network and an information processing system modeled on the structure and function of biological neural networks. Because of its self-organization, self-learning ability, distributed information storage, and parallel information processing, it has attracted wide attention, and hundreds of artificial neural network models have been developed.
In general, artificial neural networks can be divided into two types: feedforward networks and feedback networks. Typical feedforward networks include the single-layer perceptron and the BP network; feedback networks include the Hopfield network.
Artificial neural networks have been widely used in pattern recognition, signal processing, expert systems, combinatorial optimization, intelligent control, and other fields. Using artificial neural networks for pattern recognition offers advantages that traditional techniques lack: good fault tolerance [2], strong classification ability, parallel processing capability, and self-learning ability; in addition, they run fast, adapt well, and achieve high resolution. Single-layer perceptrons, BP networks, and Hopfield networks can all be used for character recognition.
In this paper, a perceptron network, a BP network, and a Hopfield feedback network are used to recognize the 26 English letters, and the recognition error rate of each network is measured experimentally. The comparison shows the recognition ability of the three neural networks and their respective advantages and disadvantages.
2 Character recognition problem description and pre-processing before network recognition
Character recognition is used more and more widely in modern daily life, for example in automatic license plate recognition systems [3, 4], handwriting recognition systems [5], and office automation [6]. This paper uses a single-layer perceptron, a BP network, and a Hopfield network to recognize the 26 English letters. First, each of the 26 letters to be recognized is digitized on a 7 × 5 grid and represented as a vector: grid positions covered by the letter are set to 1 and all other positions to 0. Figure 1 shows the digitization of the letters A, B and C; the digitized result for the letter A (leftmost) is letterA = [00100 01010 01010 10001 11111 10001 10001]', so each letter yields a 35-element vector. The vectors of the 26 standard letters form the input matrix alphabet, i.e. the network's input samples form a 35 × 26 matrix, where alphabet = [letterA, letterB, letterC, ..., letterZ]. The sample outputs must distinguish the 26 input letters: for any input letter, the network output is 1 at the position corresponding to that letter and 0 elsewhere, so the output matrix is the 26 × 26 identity matrix with 1s on the diagonal, defined as target = eye(26).
Two types of input data are used in this paper: one is the ideal standard input signal; the other is the standard input signal with added noise generated by the randn function in the MATLAB toolbox.
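To illustrate this preprocessing, the sketch below builds the 35-element vector for the letter A from the 7 × 5 grid of Fig. 1, assembles a toy version of the 35 × 26 alphabet matrix and the target = eye(26) output matrix, and adds randn-style Gaussian noise. The paper does this with the MATLAB toolbox; this is only a minimal NumPy sketch of the same idea, and all variable names are illustrative.

```python
import numpy as np

# 7x5 bitmap of the letter A from Fig. 1, row by row (1 = pixel on, 0 = off).
letter_A = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
]).flatten()                        # 35-element input vector

# In the paper, 26 such vectors form the 35x26 matrix "alphabet" and the
# desired outputs form target = eye(26). Only letter A is filled in here;
# the remaining columns are placeholders for illustration.
alphabet = np.zeros((35, 26))
alphabet[:, 0] = letter_A           # column 0 corresponds to 'A'
target = np.eye(26)                 # 26x26 identity: one output bit per letter

def add_noise(x, level, rng=np.random.default_rng(0)):
    """Return a noisy copy of input vector x (cf. MATLAB's randn-based noise)."""
    return x + level * rng.standard_normal(x.shape)

noisy_A = add_noise(letter_A, 0.2)  # e.g. noise level 0.2, the upper end of the tests
```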
3 Network design for character recognition and experimental analysis
3.1 Single-layer perceptron design and its recognition effect
Select 35 input nodes and 26 output nodes for the network, set the target error to 0.0001, and set the maximum number of training epochs to 40. The network is designed so that the output vector is 1 at the correct position and 0 at all other positions. First, the network is trained with the ideal input signals to obtain a noise-free trained network. Then the network is trained on two sets of standard input vectors together with two sets of input vectors containing random noise, so that it can classify both ideal and noisy inputs. After this training, to ensure that the network can still recognize the ideal characters accurately, it is trained once more with the noise-free standard inputs, finally yielding a network able to recognize noisy inputs. The next step is a performance test of the designed network: an arbitrary letter is presented to the network with added noise of mean 0 to 0.2, randomly generating 100 input vectors, and the letter recognition error rates of the two networks are measured. The results are shown in Fig. 2. The recognition error rate on the ordinate is half the sum of the absolute values of all elements of the difference between the actual output matrix and the desired output matrix, divided by 26; the dotted line is the error rate of the network trained with the noise-free standard input signal, and the solid line is the error rate of the network trained with noise. The figure shows that when the noise-free trained network recognizes characters, errors appear as soon as the characters contain noise; when the noise mean exceeds 0.02, the recognition error rate rises sharply, reaching a maximum of 21.5%. The noise-free trained network therefore has almost no noise immunity. The network trained with noise has a certain noise immunity: it recognizes accurately in a noise environment with mean 0 to 0.06, and its maximum recognition error rate is about 6.6%, far lower than that of the noise-free trained network.
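The paper builds and trains this perceptron with the MATLAB toolbox; the sketch below is an illustrative NumPy version using the classic perceptron learning rule on the alphabet/target matrices of Section 2, together with the error-rate measure used for the ordinate of Fig. 2. It is a minimal sketch under those assumptions, not the original implementation.

```python
import numpy as np

def train_perceptron(X, T, epochs=40):
    """Train a single-layer perceptron with the classic perceptron rule.
    X: 35 x N inputs (columns are letters); T: 26 x N desired 0/1 outputs."""
    W = np.zeros((T.shape[0], X.shape[0]))      # 26 x 35 weight matrix
    b = np.zeros((T.shape[0], 1))               # 26 x 1 bias vector
    for _ in range(epochs):
        for i in range(X.shape[1]):
            x = X[:, [i]]
            y = (W @ x + b > 0).astype(float)   # hard-limit activation
            e = T[:, [i]] - y                   # per-output error
            W += e @ x.T                        # perceptron learning rule
            b += e
    return W, b

def error_rate(Y, T):
    """Recognition error as used for Fig. 2: half the summed absolute
    difference between actual and desired outputs, divided by 26."""
    return np.abs(Y - T).sum() / 2 / 26
```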
3.2 BP network design and its recognition effect
The design of this network is described in detail in [1]. The network has 35 input nodes and 26 output nodes, and the target error is 0.0001. Both layers use the logarithmic sigmoid (logsig) activation function, whose range is (0, 1), giving a two-layer logsig/logsig network, and the hidden layer is assigned 10 neurons based on experience. As with the single-layer perceptron, a noisy trained network and a noise-free trained network are obtained by training with the ideal input signals and with inputs containing random noise. Since a noisy input vector may cause the network's 1/0 outputs to be wrong or to take other values, the output is passed through a competitive layer after training to improve noise immunity, so that each output column has a 1 only at the position of its maximum value and 0 at all other positions. Training uses an adaptive learning rate with additional momentum, implemented by directly calling traingdx from the MATLAB toolbox. The network's performance is tested under the same conditions as the single-layer perceptron; the results are shown in Fig. 3, where the dotted line is the error rate of the noise-free trained network and the solid line is that of the noisy trained network. The figure shows that both networks recognize accurately in a noise environment with mean 0 to 0.12. In the noise range 0.12 to 0.15, the error rate of the noise-free trained network is slightly lower than that of the noisy trained network, because the noise amplitude is still relatively small and the characters to be recognized remain close to the ideal characters. When the added noise mean exceeds 0.15, the characters are no longer close to the ideal characters under the action of noise, the error rate of the noise-free trained network rises sharply, and the noisy trained network performs better.
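The paper trains this network with MATLAB's traingdx (adaptive learning rate plus momentum); the sketch below only illustrates the 35-10-26 logsig/logsig architecture and the competitive post-processing stage in NumPy. The weight matrices W1, b1, W2, b2 stand for hypothetical trained parameters and are assumptions, not the paper's values.

```python
import numpy as np

def logsig(x):
    """Logarithmic sigmoid activation with output range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def compet(y):
    """Competitive layer: 1 at the position of each column maximum, 0 elsewhere."""
    out = np.zeros_like(y)
    out[np.argmax(y, axis=0), np.arange(y.shape[1])] = 1.0
    return out

def bp_forward(x, W1, b1, W2, b2):
    """Forward pass of the 35-10-26 logsig/logsig network, then competition.
    x: 35 x N inputs; W1: 10 x 35; b1: 10 x 1; W2: 26 x 10; b2: 26 x 1."""
    h = logsig(W1 @ x + b1)   # hidden layer: 10 neurons
    y = logsig(W2 @ h + b2)   # output layer: 26 neurons
    return compet(y)          # force a single 1 per output column
```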
3.3 Discrete Hopfield network design and its recognition effect
For this network the number of input nodes equals the number of output neurons, R = S = 35, and the orthogonal weight design method is adopted; the function newhop.m in the MATLAB toolbox can be called directly. Note that because newhop.m is used, all input signals must be converted to bipolar form (1 and -1), e.g. letterA = [-1 -1 1 -1 -1  -1 1 -1 1 -1  -1 1 -1 1 -1  1 -1 -1 -1 1  1 1 1 1 1  1 -1 -1 -1 1  1 -1 -1 -1 1]'. Designing a discrete Hopfield network for character recognition only requires the network to memorize the required stable equilibrium points, i.e. the 26 English letters to be recognized, so it is sufficient to train the network with the ideal input signals alone. The trained network is then tested: an arbitrary letter is presented with added noise of mean 0 to 0.5, randomly generating 100 input vectors, and the letter recognition error rate is observed. The results are shown in Fig. 4. The figure shows that the network recognizes accurately in a noise environment with mean between 0 and 0.33; in the noise range 0.33 to 0.4 the recognition error rate is below 1%; and when the noise mean exceeds 0.4 the error rate rises sharply, reaching about 10%. The attraction domain of the network's stable points is therefore roughly 0.3 to 0.4: when the noise mean is within the attraction domain, the network recognizes characters almost without error, and when the noise mean exceeds it, the error rate rises sharply.
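The paper obtains the weights by calling MATLAB's newhop.m. For intuition, the sketch below shows one common orthogonal weight design for a discrete Hopfield network, the pseudo-inverse (projection) rule, together with synchronous recall; it is an illustrative assumption and not necessarily the same procedure implemented inside newhop.m.

```python
import numpy as np

def design_hopfield(P):
    """Pseudo-inverse (projection) weight design for a discrete Hopfield net.
    P: 35 x 26 matrix of bipolar (+1/-1) prototype letters."""
    W = P @ np.linalg.pinv(P)   # projection onto the span of the prototypes
    np.fill_diagonal(W, 0.0)    # remove self-connections
    return W

def recall(W, x, steps=10):
    """Synchronous recall: iterate sign(W s) until the state settles."""
    s = np.sign(x)
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1   # break ties toward +1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s
```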
4 Conclusion
In this paper, three artificial neural networks are designed to recognize the 26 English letters. The results show that all three networks can perform character recognition effectively, with fast recognition, good adaptability, and high resolution. Figures 2 and 3 show that the noisy trained single-layer perceptron recognizes accurately in a noise environment with mean 0 to 0.06, while the noisy trained BP network does so for means of 0 to 0.12, so the fault-tolerant recognition of the BP network is better than that of the single-layer perceptron. In addition, when the noise mean reaches 0.2, the recognition error rate of the noisy trained single-layer perceptron is 6.6%, while that of the noisy trained BP network is 2.1%, so the BP network has stronger recognition ability than the single-layer perceptron. Furthermore, Figs. 2, 3 and 4 show that the Hopfield network has the highest recognition rate of the three networks, making almost no errors until the noise mean reaches 0.33; the BP network is second, and the perceptron is the worst.
Through the design, application, and performance comparison, we can conclude that the single-layer perceptron has a very simple structure and algorithm and a short training time, but a higher recognition error rate and poorer fault tolerance. The BP network's structure and algorithm are slightly more complex than those of the single-layer perceptron, but its recognition rate and fault tolerance are better. The Hopfield network combines simple design with the best fault tolerance. Therefore, the artificial neural network used for character recognition should be chosen according to the characteristics of each network and the actual requirements.