Deep learning is a new name for an approach to AI called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1944 by Warren McCulloch and Walter Pitts, two researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.
Neural networks were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who became co-directors of the new MIT Artificial Intelligence Laboratory in 1970.
Neural networks are a means of doing machine learning, in which a computer learns to perform specific tasks by analysing training examples. Usually, these examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.
Modelled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organised into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.
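To make the idea of a “simple processing node” concrete, here is a minimal sketch in Python/NumPy (the inputs, weights and sigmoid activation are arbitrary choices for illustration, not part of any particular framework): the node multiplies each incoming value by a weight, adds a bias, and squashes the sum through an activation function before passing the result onward.

```python
import numpy as np

def node(inputs, weights, bias):
    """A single processing node: weighted sum of inputs plus a bias,
    passed through a sigmoid activation."""
    z = np.dot(inputs, weights) + bias   # weighted sum of incoming connections
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid squashes the result into (0, 1)

# Three incoming connections with arbitrary illustrative weights.
print(node(np.array([0.2, 0.5, 0.1]), np.array([0.4, -0.6, 0.9]), bias=0.05))
```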
Architecture and main types of neural networks
A typical neural network contains a large number of artificial neurons called units arranged in a series of layers.

- The input layer contains units (artificial neurons) that receive input from the outside world: the data the network will learn about, recognise or otherwise process.
- The output layer contains units that present the network’s response, i.e. the result of the task it has learned.
- Hidden layers sit between the input and output layers. Their task is to transform the input into something the output units can use (a small sketch of such a layered network follows this list).
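Chaining such nodes together gives the layered structure described above. The hypothetical NumPy sketch below wires an input layer, one hidden layer and an output layer into a tiny feed-forward network with random, untrained weights; data flows through it in one direction only.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One fully connected layer with random (untrained) weights and a tanh activation."""
    w = rng.normal(size=(x.shape[-1], n_out))
    return np.tanh(x @ w)

x = rng.normal(size=(1, 4))   # input layer: 4 units receiving outside data
h = layer(x, 8)               # hidden layer: 8 units transforming the input
y = layer(h, 2)               # output layer: 2 units giving the network's response
print(y)
```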

- A perceptron, in its simplest form, has a set of input units feeding a single output unit, with no hidden layers; it is also called a single-layer perceptron.
- Radial basis function networks are similar to feed-forward neural networks, except that a radial basis function is used as the activation function of the neurons.
- Multilayer perceptron networks use one or more hidden layers of neurons; these are also known as deep feed-forward neural networks.
- Recurrent Neural Network’s (RNN) hidden layer neurons have self-connections and thus possess memory. LSTM is a type of RNN.
- A Hopfield network is a fully interconnected network in which each neuron is connected to every other neuron. It is trained by setting the neurons’ values to the desired input pattern, after which the weights are computed and then held fixed. Once trained on one or more patterns, the network will converge to a learned pattern when given a similar input (a minimal sketch follows this list).
- Boltzmann machine networks are similar to Hopfield networks, except that some neurons serve as inputs while others are hidden. The weights are initialised randomly and learned through a stochastic training procedure (such as contrastive divergence for the restricted variant) rather than plain back-propagation.
- The convolutional neural network (CNN) derives its name from the “convolution” operator. The primary purpose of convolution here is to extract features from an input image or video. Convolution preserves the spatial relationship between pixels by learning image/video features from small squares of input data.
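To make the Hopfield description concrete, here is a minimal NumPy sketch (the stored pattern and the noisy probe are made up for illustration): the weights are computed once from the stored pattern via a Hebbian outer product and then held fixed, and a corrupted input is repeatedly updated until it converges to the learned pattern.

```python
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1])        # pattern to store, units are +1/-1
W = np.outer(pattern, pattern).astype(float)     # weights computed once from the pattern
np.fill_diagonal(W, 0)                           # no self-connections; weights stay fixed

state = np.array([1, 1, 1, -1, 1, -1])           # noisy version of the stored pattern
for _ in range(5):                               # repeated updates drive the state
    state = np.sign(W @ state).astype(int)       # towards the nearest learned pattern

print(np.array_equal(state, pattern))            # True: the network converged
```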
Of these, let’s have a very brief review of CNNs and RNNs, as these are the most commonly used.
CNN
- CNNs are ideal for image and video processing.
- CNNs take fixed-size inputs and generate fixed-size outputs.
- Use CNNs to break a component (image/video) into subcomponents (lines, curves, etc.).
- A CNN is a type of feed-forward artificial neural network, a variation of the multilayer perceptron designed to require minimal preprocessing.
- CNNs use a connectivity pattern between their neurons inspired by the organisation of the animal visual cortex, whose neurons are arranged so that they respond to overlapping regions tiling the visual field.
- A CNN looks for the same patterns across all the different subfields of the image/video, as illustrated in the sketch below.
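As a rough illustration of “looking for the same pattern on every subfield,” the NumPy sketch below slides a single hand-written 3×3 filter over every position of a toy image, producing one feature map. In a real CNN the filter weights are learned rather than specified by hand, and many filters are applied in parallel.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one small kernel over every position of the image (no padding),
    producing a single feature map."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # the same weights are applied to every subfield of the image
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)             # toy 8x8 grayscale image
edge_filter = np.array([[1, 0, -1],      # a hand-written vertical-edge detector;
                        [1, 0, -1],      # a real CNN learns these weights instead
                        [1, 0, -1]])
print(conv2d(image, edge_filter).shape)  # (6, 6) feature map
```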
RNN
- RNNs are ideal for text and speech analysis.
- RNNs can handle arbitrary input/output lengths.
- Use RNNs to create combinations of subcomponents (image captioning, text generation, language translation, etc.).
- An RNN, unlike a feed-forward neural network, can use its internal memory to process arbitrary sequences of inputs.
- RNNs use time-series information, i.e. what was done last affects what is done next.
- In the simplest case, an RNN feeds the hidden layer from the previous step back in as an additional input to the next step; it builds up memory in this process, but it is not looking for the same patterns the way a CNN does (see the sketch below).
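The NumPy sketch below shows that simplest case (with made-up, untrained weights): at every time step the previous hidden state is fed back in alongside the new input, which is how the network accumulates memory over a sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_xh = rng.normal(size=(n_in, n_hidden))      # input-to-hidden weights (untrained)
W_hh = rng.normal(size=(n_hidden, n_hidden))  # hidden-to-hidden (recurrent) weights

h = np.zeros(n_hidden)                        # the memory starts empty
for x in rng.normal(size=(4, n_in)):          # a toy sequence of 4 input vectors
    h = np.tanh(x @ W_xh + h @ W_hh)          # previous hidden state feeds the next step

print(h)                                      # final state summarises the whole sequence
```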
LSTMs and GRUs are types of RNN. The key difference between them is that a GRU has two gates (reset and update) whereas an LSTM has three (input, output and forget). The GRU is similar to the LSTM in that both use gating to mitigate the vanishing gradient problem. A GRU’s performance is on par with an LSTM’s, but it is computationally more efficient.
- GRUs train faster and perform better than LSTMs on smaller amounts of training data when used for language modelling.
- GRUs are simpler and easier to modify, for example by adding new gates when the network receives additional inputs.
- In theory, LSTMs remember longer sequences than GRUs and outperform them in tasks requiring modelling long-distance relations.
- GRUs expose their complete memory, unlike LSTMs.
- It’s recommended to train both a GRU and an LSTM on your data and see which works better; a minimal sketch of setting up both follows.
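A minimal sketch of that recommendation, assuming TensorFlow’s Keras API (tf.keras) and a hypothetical binary text-classification task: the two models below are identical except for the recurrent layer, so both can be trained on the same data and compared.

```python
import tensorflow as tf

def make_model(recurrent_layer):
    """Identical architecture except for the recurrent layer (LSTM or GRU)."""
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=10000, output_dim=32),  # toy vocabulary size
        recurrent_layer(64),                                        # 64 recurrent units
        tf.keras.layers.Dense(1, activation="sigmoid"),             # binary prediction
    ])

lstm_model = make_model(tf.keras.layers.LSTM)
gru_model = make_model(tf.keras.layers.GRU)

for model in (lstm_model, gru_model):
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(x_train, y_train, validation_data=(x_val, y_val))  # train both and compare
```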
Deep learning frameworks
There are several frameworks that provide advanced AI/ML capabilities. How do you determine which framework is best for you?
The figure below summarises the most popular open source deep learning repositories. The ranking is based on the number of stars awarded by developers on GitHub (as of May 2017).

Google’s TensorFlow is a library developed at Google Brain. TensorFlow supports a broad set of capabilities, such as image, handwriting and speech recognition, forecasting, and natural language processing (NLP). Its programming interfaces include Python and C++, and alpha releases of Java, Go, R and Haskell APIs will soon be supported.
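As a small taste of the Python interface, here is a minimal sketch assuming a recent TensorFlow 2.x install (the eager/GradientTape style shown here is newer than the releases surveyed above): TensorFlow represents computations on tensors and can differentiate through them automatically.

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0, 3.0]])          # a 1x3 input tensor
w = tf.Variable([[0.1], [0.2], [0.3]])      # trainable 3x1 weights

with tf.GradientTape() as tape:             # record operations for automatic differentiation
    y = tf.matmul(x, w)                     # a simple linear model
    loss = tf.reduce_sum((y - 1.0) ** 2)    # squared error against a target of 1.0

print(loss.numpy(), tape.gradient(loss, w).numpy())  # loss and its gradient w.r.t. the weights
```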
Caffe is the brainchild of Yangqing Jia, who leads engineering for Facebook AI. Started in late 2013, Caffe was the first mainstream industry-grade deep learning toolkit. Due to its excellent convolutional model, it is one of the most popular toolkits within the computer vision community, and its speed makes it well suited to both research experiments and commercial deployment. However, it does not support fine-grained network layers like those found in TensorFlow and Theano. Caffe can process over 60M images per day on a single Nvidia K40 GPU. It is cross-platform, supports C++, Matlab and Python programming interfaces, and has a large user community that contributes to its repository of pretrained models known as the “Model Zoo.” AlexNet and GoogLeNet are two popular user-contributed networks available to the community.
Caffe 2, unveiled in April 2017, focuses on being modular and on excelling at mobile and large-scale deployments. Like TensorFlow, Caffe 2 will support the ARM architecture using the C++ Eigen library. It continues Caffe’s strong support for vision-related problems while adding RNN and LSTM networks for NLP, handwriting recognition and time-series forecasting.
MXNet is a fully featured, programmable and scalable deep learning framework that lets you mix programming models (imperative and declarative) and code in Python, C++, R, Scala, Julia, Matlab and JavaScript. MXNet supports CNNs and RNNs, including LSTM networks, and provides excellent capabilities for imaging, handwriting and speech recognition, forecasting and NLP. It is considered one of the best image classifiers and supports generative adversarial network (GAN) simulations, in which training is formulated as a game between two networks whose solution is a Nash equilibrium, a concept also used in experimental economics. Amazon supports MXNet and plans to use it in existing and upcoming services, while Apple is rumoured to be using it as well.
Theano’s architecture lacks the elegance of TensorFlow’s, but its symbolic API supports looping control (the so-called scan), which makes implementing RNNs easy and efficient. Theano supports many types of convolutions for handwriting and image classification, including medical images, and uses 3D convolution/pooling for video classification. It can handle natural language processing tasks, including language understanding, translation and generation, and it supports GANs.