
Artificial Neural Networks are particularly valuable in domains where conventional computers may struggle.
Different types of artificial neural networks are suited to different tasks, depending on their architecture and the mathematical operations they perform. Let's explore some of the essential types of Neural Networks used in Machine Learning.
Modular Neural Networks (MNNs) are collections of independent networks that work together to produce a result. Each module handles a specific sub-task and produces its own intermediate output. Unlike in a monolithic neural network, the modules do not interact with one another directly.
MNNs break down complex problems into smaller components, reducing computational complexity and enhancing computation speed. Their popularity is rapidly growing in the field of Artificial Intelligence.
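The idea can be sketched in a few lines: two stand-in "modules" (here just simple functions, not real trained networks) each handle a sub-task independently, and an integrator combines their outputs without the modules ever communicating. All names and values below are hypothetical, for illustration only.

```python
import numpy as np

def module_a(x):
    """Independent sub-network A: sums the positive part of the signal
    (a toy stand-in for a trained sub-network)."""
    return np.maximum(0.0, x).sum()

def module_b(x):
    """Independent sub-network B: sums the negative part of the signal."""
    return np.minimum(0.0, x).sum()

def modular_net(x):
    """The modules never interact; an integrator combines their outputs."""
    return module_a(x) - module_b(x)

y = modular_net(np.array([1.0, -2.0, 3.0]))
```

Because each module only sees its own sub-task, the modules can be developed, trained, and even executed in parallel, which is where the speed advantage comes from.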
A Feedforward Neural Network is the simplest form of Artificial Neural Network: information flows in one direction only. It may contain hidden layers; data enters through input nodes and exits through output nodes.
The output layer typically applies a classifying activation function. Unlike recurrent networks, feedforward networks contain no feedback connections: activations propagate only from input to output (training still adjusts the weights, commonly via backpropagation). They find applications in speech recognition and computer vision, and they handle noisy data effectively.
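A single forward pass through such a network is just a chain of matrix multiplications and activations. The weights below are hypothetical toy values, not a trained model:

```python
import numpy as np

def feedforward(x, W1, b1, W2, b2):
    """One forward pass: input -> hidden -> output.
    Information moves in a single direction; there are no feedback loops."""
    h = np.maximum(0.0, W1 @ x + b1)   # hidden layer with ReLU activation
    return W2 @ h + b2                 # linear output layer

# Toy parameters (assumed values, for illustration only).
W1 = np.array([[1.0, -1.0],
               [0.5,  0.5]])
b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]])
b2 = np.zeros(1)

y = feedforward(np.array([2.0, 1.0]), W1, b1, W2, b2)
```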
A Radial Basis Function (RBF) Neural Network has a hidden layer of radial basis units whose activation depends on the distance between the input and a learned center, followed by a linear output layer that combines those activations into the final output. RBF networks have applications in Power Restoration Systems, restoring power reliably and quickly after a blackout.
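A minimal forward pass makes the distance-based idea concrete. The centers, widths, and output weights below are assumed toy values; in practice they are fitted to data:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Hidden activations depend only on the distance between the input
    and each center; the output layer is a linear combination of them."""
    dists = np.linalg.norm(centers - x, axis=1)        # ||x - c_i||
    phi = np.exp(-(dists ** 2) / (2 * widths ** 2))    # Gaussian basis
    return weights @ phi                               # linear readout

centers = np.array([[0.0, 0.0],
                    [1.0, 1.0]])      # hypothetical centers
widths = np.array([1.0, 1.0])
weights = np.array([1.0, -1.0])

y = rbf_forward(np.array([0.0, 0.0]), centers, widths, weights)
```

An input sitting exactly on a center activates that basis unit fully (distance zero gives activation 1), while far-away units contribute little.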
A Kohonen Self-Organizing Neural Network maps input vectors onto a discrete map of one or two dimensions. The neurons' positions on the map grid remain fixed during training; it is their weight vectors that change. For each training input, a winning neuron (the one whose weights are closest to the input) is chosen, and the weights of the winner and its neighbors are moved toward that input. This network is used for data pattern recognition, medical analysis, and clustering.
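One training step can be sketched as follows: find the best-matching unit, then pull the weights of nearby grid neurons toward the input, weighted by a Gaussian neighborhood. The map size, learning rate, and radius are arbitrary toy choices:

```python
import numpy as np

def som_step(weights, x, lr=0.5, radius=1.0):
    """One SOM update: grid positions are fixed; only weight vectors move.
    The best-matching unit and its grid neighbours shift toward the input."""
    rows, cols, dim = weights.shape
    grid = np.array([[i, j] for i in range(rows) for j in range(cols)], dtype=float)
    flat = weights.reshape(-1, dim)
    bmu = np.argmin(np.linalg.norm(flat - x, axis=1))   # winning neuron
    d2 = np.sum((grid - grid[bmu]) ** 2, axis=1)        # grid distance to winner
    h = np.exp(-d2 / (2 * radius ** 2))                 # Gaussian neighbourhood
    flat += lr * h[:, None] * (x - flat)                # pull weights toward x
    return flat.reshape(weights.shape)

W = np.array([[[0.0, 0.0], [1.0, 1.0]]])   # tiny 1x2 map, 2-D weights (toy)
W = som_step(W, np.array([1.0, 1.0]))
```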
A Recurrent Neural Network (RNN) feeds the output of a layer back into its input, so the hidden state acts as a memory cell. RNNs retain information from previous time steps, which lets them make predictions over sequences and correct errors for improved outcomes. They find applications in tasks such as converting text to speech, and they are typically trained with supervised learning, for example via backpropagation through time.
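The feedback loop amounts to passing the previous hidden state back in at every step. A minimal recurrent step, with hypothetical toy weights:

```python
import numpy as np

def rnn_step(x, h_prev, Wx, Wh, b):
    """One recurrent step: the previous hidden state is fed back in,
    acting as a memory of earlier time steps."""
    return np.tanh(Wx @ x + Wh @ h_prev + b)

# Toy parameters (assumed values, for illustration only).
Wx = np.array([[0.5],
               [1.0]])
Wh = np.array([[0.1, 0.0],
               [0.0, 0.1]])
b = np.zeros(2)

h = np.zeros(2)
for x_t in [1.0, 0.5, -1.0]:       # process a short input sequence
    h = rnn_step(np.array([x_t]), h, Wx, Wh, b)
```

At each step the new hidden state depends on both the current input and the previous state, which is exactly what lets the network carry information forward in time.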
Convolutional Neural Networks (ConvNets) use neurons with learnable weights and biases. They excel at image and signal processing, particularly in computer vision, where they have largely replaced hand-crafted pipelines built with tools like OpenCV. ConvNets process images region by region and classify them into categories, detecting features such as edges from changes in pixel values. They achieve high accuracy in image classification and are widely used in computer-vision applications ranging from weather prediction to agriculture.
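The edge-detection idea can be shown with a plain sliding-window convolution. The kernel below is a fixed vertical-edge filter for illustration; in a real ConvNet such filters are learned:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide a small filter over the image
    and take a weighted sum of pixels at each position."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image with a vertical edge between columns 1 and 2.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1.0, 1.0]])   # responds where values change left-to-right
resp = conv2d(img, edge_kernel)
```

The response is large exactly where neighboring pixel values change, which is how a ConvNet's early layers pick out edges.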
Long Short-Term Memory (LSTM) networks, introduced by Hochreiter and Schmidhuber in 1997, are designed to remember information for extended periods in memory cells. LSTMs store previous values in these cells and can discard them through "forget gates."
New information is added via "input gates" and exposed to the next hidden state through "output gates." LSTMs have applications in composing music, complex sequence learning, and generating text in the style of Shakespeare.
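The three gates can be written out directly for a single step. This is a minimal sketch of a standard LSTM cell with randomly initialized (untrained) toy parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: the forget gate decides what to drop from the cell,
    the input gate decides what to write, and the output gate controls
    what the cell exposes as the next hidden state."""
    n = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[:n])            # forget gate
    i = sigmoid(z[n:2 * n])       # input gate
    o = sigmoid(z[2 * n:3 * n])   # output gate
    g = np.tanh(z[3 * n:])        # candidate cell values
    c = f * c_prev + i * g        # update the memory cell
    h = o * np.tanh(c)            # next hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 2
W = rng.standard_normal((4 * n_hid, n_in + n_hid))   # toy, untrained weights
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.standard_normal(n_in), np.zeros(n_hid), np.zeros(n_hid), W, b)
```

Because the cell state `c` is carried between steps with only multiplicative gating, gradients can flow across many time steps, which is what lets LSTMs remember over long sequences.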