
What is a neural network?

Tasks such as classification, clustering, and pattern recognition can be performed on a computer by running biologically inspired simulations known as Artificial Neural Networks. Any network of artificial neurons, inspired by the biological brain and configured to perform a specific task, is called an Artificial Neural Network (ANN). Neural networks are a crucial part of machine learning.
Continue reading to learn more about these special networks.

Artificial Neural Network and Biological Neural Network

Neural networks are similar to the human brain in many ways, two of which are:

  • Knowledge is acquired by means of learning in a neural network.
  • This knowledge is stored in the inter-neuron connection strengths of the network, known as synaptic weights.

In a biological neural network, the dendrites correspond to the weighted inputs of an artificial neural network, with the weights determined by the synaptic inter-connections.

The cell body corresponds to the artificial neuron itself, which contains both the summation and threshold units.

The axon transfers the output, just as the output unit does in an artificial neural network. In short, the working of the basic biological neuron is the model for the ANN.

Neural Networks Architecture

A typical neural network consists of a large number of artificial neurons, called units, arranged in a series of layers. The layers of an artificial neural network are:

  • Input layer — Contains the units (artificial neurons) that receive input from the outside world; this is the data the network will learn about, recognize, or otherwise process.
  • Output layer — Contains the units that respond with how well the network has learned the task it was given.
  • Hidden layer — Consists of the units that sit between the input and output layers. The main job of this layer is to transform the input into something the output layer can use.

In a fully connected network, every hidden neuron is connected to every neuron in the preceding input layer and to every neuron in the following output layer.
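
To make this layered, fully connected structure concrete, here is a minimal Python (NumPy) sketch; the layer sizes and random weights are made up purely for illustration:

```python
import numpy as np

# Hypothetical layer sizes: 3 input units, 4 hidden units, 2 output units.
rng = np.random.default_rng(0)
n_input, n_hidden, n_output = 3, 4, 2

# Fully connected: every hidden unit connects to every input unit,
# and every output unit connects to every hidden unit.
W_hidden = rng.normal(size=(n_hidden, n_input))   # input -> hidden weights
W_output = rng.normal(size=(n_output, n_hidden))  # hidden -> output weights

x = np.array([0.5, -1.0, 2.0])   # one example input vector

hidden = np.tanh(W_hidden @ x)   # hidden layer transforms the input
output = W_output @ hidden       # output layer reports the result
print(hidden, output)
```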

How does a Neural Network work?

Artificial neural networks can be depicted as weighted directed graphs, in which the artificial neurons correspond to nodes and the weighted, directed edges are the connections between neuron outputs and neuron inputs.

An artificial neural network receives inputs from the external world as patterns and images in vector form. Mathematically, these inputs are written x(n) for n inputs.

Each input is then multiplied by its corresponding weight. The weights represent the information the network has acquired to solve the problem; a weight measures the strength of the interconnection between individual neurons in the network.

Inside the computing unit, all these weighted inputs are summed up. If the weighted sum would otherwise be zero, a bias is added to make the output non-zero (or, more generally, to scale up the system's response). The bias enters as an input fixed at 1, with its own adjustable weight.

The sum of all the weighted inputs can be any numerical value, potentially very large. To keep the response within the desired range, we set up a threshold value, and to apply it the sum is passed through an activation function.
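
As a rough sketch of this computation, the following Python function sums the weighted inputs, adds the bias (whose input is fixed at 1), and passes the result through a threshold activation; the input values and weights here are invented for the example:

```python
import numpy as np

def artificial_neuron(x, w, b, activation):
    """Sum the weighted inputs, add the bias, then apply an activation."""
    net = np.dot(w, x) + b   # bias b enters with a fixed input of 1
    return activation(net)

# Invented inputs and weights, purely for illustration.
x = np.array([0.2, 0.7, -1.5])
w = np.array([0.4, -0.1, 0.6])
b = 0.5

step = lambda net: 1.0 if net >= 0 else 0.0   # simple threshold activation
print(artificial_neuron(x, w, b, step))
```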

A set of transfer functions employed to obtain the desired output is known as an activation function, and it can be linear or non-linear.
Some commonly used activation functions are the binary step, the logistic sigmoid, and the hyperbolic tangent (tanh); the sigmoid and tanh are non-linear.

Sigmoidal functions – These possess an 'S'-shaped curve. The hyperbolic tangent (tanh) is specifically used to approximate the output from the net input.
The logistic sigmoid is defined as: f(x) = 1 / (1 + exp(−σx)), where σ represents the steepness.

Binary – A binary function has only two outputs, 0 and 1, so we need to set up a threshold value. When the net weighted input is larger than the threshold, the output is 1; otherwise it is 0.
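
The activation functions mentioned above can be written in a few lines of Python; the `steepness` and `threshold` parameters below are illustrative defaults, not fixed conventions:

```python
import numpy as np

def binary_step(x, threshold=0.0):
    """Outputs 1 when the net input exceeds the threshold, otherwise 0."""
    return np.where(x > threshold, 1.0, 0.0)

def sigmoid(x, steepness=1.0):
    """Logistic sigmoid: f(x) = 1 / (1 + exp(-sigma * x))."""
    return 1.0 / (1.0 + np.exp(-steepness * x))

def tanh(x):
    """Hyperbolic tangent, an S-shaped curve with outputs in (-1, 1)."""
    return np.tanh(x)

net = np.array([-2.0, 0.0, 2.0])
print(binary_step(net), sigmoid(net), tanh(net))
```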

How does a Neural Network learn?

Learning in a neural network occurs when its weights are revised in response to the training examples it is shown.

For instance, take inputs in pattern form belonging to two different types of patterns, 'I' and 'O'; let b be the bias and y the desired output.
We wish to classify each input pattern as an 'I' or an 'O'.

Following are the steps performed:

  1. The 9 inputs x1 to x9, together with the bias b (an input fixed at 1), are entered into the network for the first pattern.
  2. At the very start, weights are initialized to 0.
  3. The weights are then updated for each neuron using the formula:
    Δwi = xi · y for i = 1 to 9 (Hebb's Rule)
  4. Finally, new weights are calculated using the formulae:
    wi(new) = wi(old) + Δwi
    wi(new) = [1 1 1 −1 1 −1 1 1 1 1]
  5. Next, we input second pattern to the network but we don’t initialize the weights to zero. Initial weights used in this case are the final weights we obtained after presenting the first pattern.
  6. Steps 1–4 are then repeated for the second pattern.
  7. The new weights are now wi(new) = [0 0 0 −2 −2 −2 0 0 0].

Hence, we conclude that these final weights capture what the network has learned, enabling it to classify the input patterns accurately.
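
A small Python sketch of this Hebbian update is shown below. The article does not give the exact 'I' and 'O' patterns, so the bipolar pixel patterns here are made up; only the update rule Δwi = xi · y matches the steps above:

```python
import numpy as np

# Made-up 3x3 bipolar pixel patterns standing in for 'I' and 'O'.
pattern_I = np.array([-1,  1, -1,
                      -1,  1, -1,
                      -1,  1, -1])
pattern_O = np.array([ 1,  1,  1,
                       1, -1,  1,
                       1,  1,  1])

# Target y = +1 for class 'I' and y = -1 for class 'O'.
training = [(pattern_I, 1), (pattern_O, -1)]

w = np.zeros(9)   # weights start at zero (step 2)
b = 0.0           # bias weight; its input is fixed at 1

for x, y in training:
    w += x * y    # Hebb's rule: delta w_i = x_i * y (step 3)
    b += 1 * y    # the bias is updated with its input of 1

print(w, b)
```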

Different types of “Learning” in Neural Networks

Unsupervised Learning 

Here, the input data is used to train the network even though the correct output is not known. The network classifies the input data on its own, and the weights are adjusted accordingly through feature extraction.

Supervised Learning 

In this type of learning, training data and the expected output are supplied to the network, and the weights are adjusted until the network yields the desired output.

Online Learning 


In online learning, the weights and threshold are adjusted after each training sample is presented to the network.

Offline Learning


The weight vector and threshold are adjusted only after the entire training set has been presented to the network. This is also known as batch learning.

Reinforcement Learning


In this type of learning, the exact output is unknown, but the network receives feedback on whether its output is right or wrong. It is classified as semi-supervised learning.

Learning Algorithms for Neural Networks

Gradient Descent

This is the simplest training algorithm used with supervised learning models. Whenever the actual output differs from the expected (target) output, we compute the difference, or error. The gradient descent algorithm then modifies the weights of the network so that this error is minimized.
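
Here is a toy gradient descent sketch for a single linear neuron trained with squared error; the data, learning rate, and iteration count are arbitrary choices for illustration:

```python
import numpy as np

# Toy problem: a single linear neuron fitted by gradient descent on squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))        # 20 made-up samples with 3 inputs each
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                      # targets generated from known weights

w = np.zeros(3)                     # network weights to be learned
learning_rate = 0.1

for epoch in range(200):
    pred = X @ w
    error = pred - y                     # actual output minus target output
    gradient = X.T @ error / len(X)      # gradient of the mean squared error
    w -= learning_rate * gradient        # move weights to reduce the error

print(w)   # approaches true_w as the error shrinks
```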

Back Propagation

Back propagation is essentially an extension of the gradient-based delta learning rule. After the difference between the target output and the actual output is computed, the error is propagated backwards from the output layer, through the hidden layer, to the input layer. This method is mostly used for multilayer neural networks.
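
The sketch below shows backpropagation for a network with one hidden layer, trained on the XOR problem as a toy task. The layer sizes, learning rate, and iteration count are arbitrary, and sigmoid units are assumed throughout:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets (a classic toy task for multilayer networks).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5

for _ in range(5000):
    # Forward pass through the hidden layer to the output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the error travels from the output layer, through the
    # hidden layer, back toward the inputs.
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out);    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hidden); b1 -= lr * d_hidden.sum(axis=0)

print(np.round(out, 2))   # should move toward [0, 1, 1, 0]
```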

Different type of data sets involved

Training set: A set of examples used for learning the important parameters of the network, such as the weights. One epoch is one complete training cycle over the training set.

Validation set: A set of examples used to tune choices about the network such as its architecture, for instance the number of hidden units.

Test set: A set of examples used only to assess the performance (generalization) of a fully trained network, or to predict outputs for new inputs.
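
One simple way to carve a dataset into these three sets is shown below; the 60/20/20 proportions and the random data are just example choices:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))        # 100 made-up examples with 5 features
y = rng.integers(0, 2, size=100)     # made-up binary labels

indices = rng.permutation(len(X))    # shuffle before splitting
train_idx, val_idx, test_idx = indices[:60], indices[60:80], indices[80:]

X_train, y_train = X[train_idx], y[train_idx]   # learn the weights here
X_val,   y_val   = X[val_idx],   y[val_idx]     # tune architecture choices here
X_test,  y_test  = X[test_idx],  y[test_idx]    # judge generalization here only
print(len(X_train), len(X_val), len(X_test))    # 60 20 20
```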

Popular Neural Networks

Perceptron — A neural network in which the input units are connected directly to a single output unit, with no hidden layers. Also known as a 'single layer perceptron'.

Multilayer Perceptron — These networks contain one or more hidden layers of neurons, in contrast to the single layer perceptron. Also known as 'deep feedforward neural networks'.

Radial Basis Function Network — These networks resemble feedforward neural networks, except that a radial basis function serves as the activation function of the neurons.

Recurrent Neural Network — In this type of neural network the hidden layer has connections back to itself, which gives the network memory. At any time step, each hidden neuron receives activation from the layer below as well as its own activation value from the previous step.

Hopfield Network — A completely interconnected network in which every neuron is connected to every other neuron. To train the network on input patterns, the neuron values are set to the desired pattern and the weights are then computed; the individual weights are not changed afterwards. Once the network has been trained on one or more patterns, it converges only to those learned patterns. In this respect it differs from the other neural networks described above.
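
A small Hopfield-style sketch in Python appears below, assuming bipolar (+1/−1) patterns, the outer-product (Hebbian) weight rule, and synchronous updates; the stored patterns are made up for illustration:

```python
import numpy as np

# Two made-up bipolar patterns to store in the network.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1],
    [ 1,  1,  1, -1, -1, -1],
])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)      # every neuron connects to every other neuron
np.fill_diagonal(W, 0)       # no self-connections; weights stay fixed afterwards

def recall(state, steps=10):
    """Update all neurons repeatedly; the state settles on a learned pattern."""
    state = state.copy()
    for _ in range(steps):
        state = np.where(W @ state >= 0, 1, -1)
    return state

noisy = np.array([-1, -1, 1, -1, 1, -1])   # first stored pattern with one bit flipped
print(recall(noisy))                       # converges back to the first pattern
```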
