Performance Analysis of the Conjugate Descent Learning Rule of Feed-Forward Neural Networks for Pattern Classification
Pages: 723-725
The conventional backpropagation learning algorithm is widely used to train multilayer feed-forward neural networks for pattern mapping. This learning rule uses the first derivative of the instantaneous local squared error with respect to the current weight vector in the weight space. Consequently, each presented pattern produces a different local error, and the weights are modified to minimize only that current local error, so the algorithm frequently becomes trapped in a local minimum. The optimal weights can be determined only when the global error is minimized. In this paper, we apply the conjugate descent method to obtain the optimal weight vector. The conjugate descent method exploits second-derivative (curvature) information of the error surface with respect to the weight vector in the weight space. Our results indicate that the proposed method improves the performance of feed-forward neural networks for the pattern classification of handwritten English words.
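As a minimal sketch of the idea (not the paper's implementation), the following Python/NumPy example trains a small 2-4-1 feed-forward network with Fletcher-Reeves conjugate descent, replacing the per-pattern gradient step of backpropagation with a conjugate search direction and a line search over the batch error. The architecture, the XOR data set, the grid line search, and all hyper-parameters are illustrative assumptions.

```python
import numpy as np

# Toy data: XOR pattern-classification task (illustrative, not the
# handwritten-word data used in the paper).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def unpack(w):
    """Split the flat weight vector into the two layer matrices (2-4-1 net)."""
    return w[:8].reshape(2, 4), w[8:].reshape(4, 1)

def forward(w):
    W1, W2 = unpack(w)
    h = np.tanh(X @ W1)               # hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2)))  # sigmoid output
    return h, y

def loss(w):
    """Global (batch) sum-of-squares error over all patterns."""
    _, y = forward(w)
    return 0.5 * np.sum((y - T) ** 2)

def grad(w):
    """Backpropagated gradient of the batch error w.r.t. all weights."""
    W1, W2 = unpack(w)
    h = np.tanh(X @ W1)
    y = 1.0 / (1.0 + np.exp(-(h @ W2)))
    dy = (y - T) * y * (1 - y)        # output-layer delta
    gW2 = h.T @ dy
    dh = (dy @ W2.T) * (1 - h ** 2)   # hidden-layer delta
    gW1 = X.T @ dh
    return np.concatenate([gW1.ravel(), gW2.ravel()])

def line_search(w, d):
    """Crude grid line search along direction d; alpha=0 keeps the step safe."""
    alphas = np.linspace(0.0, 2.0, 41)
    return alphas[int(np.argmin([loss(w + a * d) for a in alphas]))]

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=12)
w0 = w.copy()                          # kept only to compare initial error
g = grad(w)
d = -g                                 # first direction: steepest descent
for it in range(200):
    w = w + line_search(w, d) * d
    g_new = grad(w)
    beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
    d = -g_new + beta * d              # conjugate direction update
    g = g_new
    if (it + 1) % 12 == 0:             # periodic restart to steepest descent
        d = -g

print("initial error:", loss(w0), "final error:", loss(w))
```

Because the line search is taken over the global batch error rather than the instantaneous per-pattern error, each accepted step can only decrease (or keep) the overall error, which is the property the abstract contrasts with plain pattern-by-pattern backpropagation.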
Keywords: Conjugate descent, feed-forward neural networks, backpropagation learning, gradient descent method, pattern classification.