### **Layer Activations**

The most
straightforward visualization technique is to show the activations of the
network during the forward pass. For ReLU networks, the activations usually
start out looking relatively blobby and dense, but as the training progresses
the activations usually become more sparse and localized. One dangerous pitfall
that can be easily noticed with this visualization is that some activation maps
may be all zero for many different inputs, which can indicate *dead*
filters, and can be a symptom of high learning rates.
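A dead-filter check of this kind can be sketched in NumPy; the activation shapes, the simulated ReLU outputs, and the channels marked dead below are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical post-ReLU activations from one CONV layer over a batch:
# shape (batch, channels, height, width); ReLU makes all values >= 0.
acts = np.maximum(rng.normal(size=(8, 16, 7, 7)), 0.0)
acts[:, 3] = 0.0   # simulate a *dead* filter: zero for every input
acts[:, 11] = 0.0  # another dead filter

# A filter is "dead" if its activation map is all zero across the whole batch.
dead = [c for c in range(acts.shape[1]) if np.all(acts[:, c] == 0.0)]
print("dead filters:", dead)  # -> [3, 11]
```

In practice you would run this over many real inputs; a channel that stays at zero for all of them is the warning sign described above.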
### **Convolutional/FC Filters**

The
second common strategy is to visualize the weights. These are usually most
interpretable on the first CONV layer which is looking directly at the raw
pixel data, but it is possible to also show the filter weights deeper in the
network. The weights are useful to visualize because well-trained networks
usually display nice and smooth filters without any noisy patterns. Noisy
patterns can be an indicator of a network that hasn't been trained for long
enough, or possibly a very low regularization strength that may have led to
overfitting.
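A first-layer weight visualization can be sketched as follows; the filter shapes and the random weights stand in for a real trained layer, and `filters_to_grid` is a hypothetical helper that tiles the per-filter-normalized weights into one image:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first CONV layer weights: (num_filters, height, width, channels),
# e.g. 16 filters of size 5x5 looking at raw RGB pixels.
W = rng.normal(size=(16, 5, 5, 3))

def filters_to_grid(W, cols=4, pad=1):
    """Normalize each filter to [0, 1] and tile all filters into one image grid."""
    n, h, w, c = W.shape
    rows = int(np.ceil(n / cols))
    grid = np.ones((rows * (h + pad) - pad, cols * (w + pad) - pad, c))
    for i in range(n):
        f = W[i]
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)  # per-filter normalization
        r, col = divmod(i, cols)
        grid[r*(h+pad):r*(h+pad)+h, col*(w+pad):col*(w+pad)+w] = f
    return grid

grid = filters_to_grid(W)
print(grid.shape)  # -> (23, 23, 3)
```

The resulting array can be displayed with any image viewer (e.g. matplotlib's `imshow`); smooth, structured tiles suggest good training, while noisy tiles point to the problems mentioned above.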

### **Backpropagation**

The primary reason
we are interested in this problem is that in the specific case of Neural Networks,
*f* will correspond to the loss function (*L*) and the inputs *x* will consist of the training data and the neural network weights. The training data is given and fixed, so it acts as a constant in the equation; that leaves the weights and biases of each layer as the variables. During backpropagation, a Convolutional Neural Network computes the gradient of the loss with respect to the parameters at every layer, and these gradients are then used to update the weights so that the network converges toward the final solution.
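The idea can be made concrete with a minimal sketch: one linear layer with a squared-error loss, where the data `x` is fixed and only the weights `W` are variables. The analytic (backpropagated) gradient is checked against a numerical finite-difference estimate; all shapes and values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny concrete case: y = x @ W, with squared-error loss L.
# x (the training data) is given and fixed; W holds the trainable weights.
x = rng.normal(size=(4, 3))   # batch of 4 inputs, 3 features
W = rng.normal(size=(3, 2))   # weights mapping 3 features -> 2 outputs
t = rng.normal(size=(4, 2))   # fixed targets

def loss(W):
    y = x @ W
    return 0.5 * np.sum((y - t) ** 2)

# Backpropagation: the analytic gradient is dL/dW = x^T (x @ W - t).
grad = x.T @ (x @ W - t)

# Finite-difference check that the backpropagated gradient is correct.
eps = 1e-6
num = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.allclose(grad, num, atol=1e-4))  # -> True

# One gradient-descent update step toward convergence:
W_new = W - 0.01 * grad
```

A full network repeats this layer by layer via the chain rule, but the pattern is the same: compute the gradient of the loss, then step the weights against it.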