About Organisation
• The Defence Research and Development Organisation (DRDO) is the premier agency under the Ministry of Defence. DRDO is dedicatedly working towards enhancing self-reliance in defence systems, and undertakes design and development leading to the production of world-class weapon systems and equipment in accordance with the expressed needs.
• DRDO works in various areas of military technology, including aeronautics, combat vehicles, electronics, instrumentation, engineering systems, missiles, materials, advanced computing, simulation and life sciences.
• DRDO has many research and development labs spread across the country, each meant to work on different projects. One of these labs is the Scientific Analysis Group (SAG), located in Timarpur, Delhi.
• The Scientific Analysis Group (SAG) was established in 1963. Its primary function is to evolve new scientific methods for the design and analysis of communication systems. SAG works in the area of cryptology and information security, and develops tools and technologies based on contemporary mathematics, computer science, electronics and communication for information security.
Technology Used in Project
• Python is an open-source, interpreted, high-level language that provides a great approach to object-oriented programming. It is one of the languages most widely used by data scientists for data science projects and applications.
• Python provides great functionality for dealing with mathematics, statistics and scientific functions, and offers excellent libraries for data science applications.
• Version of Python used in project: Python 3.7.6
Features of Python
Python Libraries Used in This Project
• Pandas:- Pandas is the data manipulation and analysis library of Python. It is widely used in data science for data processing, and for visualization as well.
• Matplotlib:- Matplotlib is an amazing visualization library in Python for 2D plots of arrays. It was introduced by John Hunter in 2002. Matplotlib offers several plot types such as line, bar, scatter and histogram.
• Seaborn:- Seaborn is a Python data visualization library based on Matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics, with plots such as the distribution plot, scatter plot and heatmap.
• Scikit-learn:- Sklearn is a Python library for machine learning. It provides various algorithms and functions used in machine learning, and is built on NumPy, SciPy and Matplotlib.
• Keras:- Keras is a great library for building and training neural network models.
• TensorFlow:- TensorFlow is a popular Python framework for machine learning and deep learning. It helps in working with artificial neural networks that need to handle multiple data sets.
DEEP LEARNING – INTRO
• Deep learning is a subset of machine learning, and is essentially a neural network with three or more layers. These neural networks attempt to simulate the behaviour of the human brain (albeit far from matching its ability), allowing them to "learn" from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers help to optimize and refine for accuracy.
• Deep learning architectures such as deep neural networks, recurrent neural networks, artificial neural networks and convolutional neural networks have been applied to fields including computer vision, machine vision, speech recognition and many others.
Applications of Deep Learning
• Chatbots
• Disease Detection in Healthcare
• Face Recognition
• Self-Driving Cars
• Data Analysis
• Recommendation Systems
Convolutional Neural Networks
A Convolutional Neural Network (ConvNet/CNN) is a deep learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects/objects in the image, and differentiate one from the other. The pre-processing required in a ConvNet is much lower than for other classification algorithms.
• Architecture:- A convolutional neural network consists of an input layer and an output layer, as well as multiple hidden layers. The hidden layers of a CNN typically consist of a series of convolutional layers that convolve with a multiplication or other dot product. The activation function is commonly a ReLU layer, which is subsequently followed by additional layers such as pooling layers, fully connected layers and normalization layers. These are referred to as hidden layers because their inputs and outputs are masked by the activation function and final convolution.
Convolutional Layer:- This layer is the first layer used to extract the various features from the input images. Three elements enter into the convolution operation: the input image, the feature detector (filter) and the feature map. The mathematical operation of convolution is performed between the input image and a filter of a particular size MxM. By sliding the filter over the input image, the dot product is taken between the filter and the patch of the input image matching the size of the filter (MxM). The output is termed the feature map, and it gives us information about the image such as corners and edges. Later, this feature map is fed to other layers to learn several other features of the input image.
The most important uses of this layer are, first, to reduce the size of the image and, second, to extract the main features which are integral to the image. After the convolution operation is done, we apply the rectifier function (ReLU) to the convolutional layer in order to increase the non-linearity in our images, as the sketch below illustrates.
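To make the sliding-window dot product concrete, here is a minimal NumPy-only sketch of a single-channel convolution followed by ReLU. The toy image and the vertical-edge filter are illustrative, not taken from the project code.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide an MxM filter over the image, taking a dot product at each position."""
    m = kernel.shape[0]
    h, w = image.shape
    feature_map = np.zeros((h - m + 1, w - m + 1))
    for i in range(h - m + 1):
        for j in range(w - m + 1):
            # dot product between the filter and the image patch beneath it
            feature_map[i, j] = np.sum(image[i:i + m, j:j + m] * kernel)
    return feature_map

image = np.random.rand(6, 6)           # toy grayscale image
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]])        # simple vertical-edge detector
fmap = np.maximum(convolve2d(image, kernel), 0)  # ReLU adds non-linearity
print(fmap.shape)                      # (4, 4): the image shrinks by M-1 per side
```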
Pooling Layer:- In most cases, a convolutional layer is followed by a pooling layer. The primary aim of this layer is to decrease the size of the convolved feature map and so reduce the computational cost. Depending upon the method used, there are several types of pooling operations. In max pooling, the largest element is taken from each region of the feature map. Just like the convolution step, the creation of the pooled feature map makes us dispose of unnecessary information or features: with 2x2 pooling we lose roughly 75% of the original information in the feature map, since for each 4 pixels we keep only the maximum value and discard the other 3. The other main aim of pooling is to add a "spatial variance" capability: small changes in the location of a feature in the input detected by the convolutional layer will still result in a pooled feature map with the feature in the same location. This capability added by pooling is called the model's invariance to local translation. A toy example follows.
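A toy 2x2 max-pooling pass, again in plain NumPy, showing how a 4x4 feature map shrinks to 2x2 and keeps only the maximum of each block (the values are made up for illustration):

```python
import numpy as np

def max_pool(feature_map, size=2):
    """Keep the maximum of each non-overlapping size x size block."""
    h, w = feature_map.shape
    pooled = np.zeros((h // size, w // size))
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            pooled[i // size, j // size] = feature_map[i:i + size, j:j + size].max()
    return pooled

fmap = np.array([[1., 3., 2., 0.],
                 [5., 6., 1., 2.],
                 [0., 2., 4., 4.],
                 [1., 1., 3., 8.]])
print(max_pool(fmap))  # [[6. 2.] [2. 8.]]: 16 values reduced to 4 (75% discarded)
```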
Flattening:- After finishing the previous two steps, we have a pooled feature map. As the name of this step implies, we literally flatten the pooled feature map into a single column. The reason we do this is that we need to feed this data into a fully connected network later on. Since there are usually multiple pooled feature maps coming out of the previous step, the flattening step leaves us with one long vector of input data, which we then pass through the artificial neural network for further processing.
Fully Connected Network:- The fully connected (FC) layer consists of the weights and biases along with the neurons, and is used to connect the neurons between two different layers. These layers are usually placed before the output layer and form the last few layers of a CNN architecture; a short sketch follows.
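A minimal Keras sketch of the flatten-then-fully-connected transition described above; the feature-map count and layer sizes are illustrative assumptions, not the project's exact values.

```python
from tensorflow.keras import layers, models

head = models.Sequential([
    layers.Input(shape=(31, 31, 32)),    # e.g. 32 pooled feature maps of 31x31
    layers.Flatten(),                    # -> one long vector (31*31*32 = 30752)
    layers.Dense(64, activation='relu')  # fully connected layer
])
head.summary()
```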
The flattened output is fed to a feed-forward neural network, and backpropagation is applied at every iteration of training. Over a series of epochs, the model becomes able to distinguish between dominating and certain low-level features in images, and to classify them using the softmax classification technique.
Dropout:- Usually, when all the features are connected to the FC layer, this can cause overfitting on the training dataset. Overfitting occurs when a particular model works so well on the training data that it has a negative impact on the model's performance when used on new data. To overcome this problem, a dropout layer is utilised, wherein a few neurons are dropped from the neural network during the training process, resulting in a reduced size of the model. On passing a dropout rate of 0.3, 30% of the nodes are dropped out randomly from the neural network, as in the sketch below.
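A small Keras sketch of a classification head with the dropout rate of 0.3 mentioned above; the surrounding layer sizes are illustrative assumptions.

```python
from tensorflow.keras import layers, models

classifier = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(128,)),
    layers.Dropout(0.3),                   # randomly drops 30% of nodes while training
    layers.Dense(1, activation='sigmoid')  # e.g. binary tumour / no-tumour output
])
```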
• Padding:- Padding is a term relevant to convolutional neural networks; it refers to the number of pixels added around an image when it is being processed by the kernel of a CNN. For example, with zero padding, every pixel value that is added is of value zero.
• Weights:-Each neuron in a neural network computes an output value by applying a
specific function to the input values coming from the receptive field in the previous layer.
The function that is applied to the input values is determined by a vector of weights and a
bias (typically real numbers). Learning, in a neural network, progresses by making iterative
adjustments to these biases and weights.
Activation Function:- Finally, one of the most important parameters of the CNN model is the activation function. Activation functions are used to learn and approximate any kind of continuous and complex relationship between variables of the network. In simple words, the activation function decides which information of the model should fire in the forward direction and which should not at the end of the network, and it adds non-linearity to the network. There are several commonly used activation functions, such as ReLU, softmax, tanh and sigmoid, each with a specific usage. For a binary classification CNN model, the sigmoid and softmax functions are preferred, and for multi-class classification, softmax is generally used; both are sketched below.
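For reference, minimal NumPy definitions of the two output activations named above; sigmoid returns a single probability, softmax a distribution over classes. The input values are arbitrary examples.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - np.max(x))              # shift by the max for numerical stability
    return e / e.sum()

print(sigmoid(0.8))                        # one probability, ~0.69 (binary case)
print(softmax(np.array([2.0, 1.0, 0.1])))  # class probabilities summing to 1
```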
Project - Brain Tumour Detection
What is a Brain Tumour?
• A brain tumour is an abnormal growth of cells that changes the normal structure and behaviour of the brain. Tumours that form in the skull can grow, put pressure on the brain and adversely affect the health of the body.
• Brain tumours can be classified in several different ways. For instance, one popular classification divides brain tumours into benign (non-cancerous) and malignant (cancerous) tumours.
Brain Tumour Detection Using Deep Learning
• Recent progress in the field of deep learning has helped the health industry in medical imaging for the diagnosis of many diseases.
• Accurate analysis of MRI scans is needed to detect brain tumours, and this can be achieved by using deep learning algorithms such as convolutional neural networks.
• A deep learning approach, along with data augmentation and image processing, is used to detect brain tumours by categorising brain MRI scan images into two classes: those with a brain tumour and those without.
Problem Statement:- Every year, around 11,700 people are diagnosed with a brain tumour. Proper treatment, planning and accurate diagnostics should be implemented to improve the life expectancy of patients. The best technique to detect brain tumours is Magnetic Resonance Imaging (MRI). Therefore, with the help of the convolutional neural network, a deep learning algorithm, and image processing of the MRI images, we attempt to detect brain tumours where present.
• Aim:- In this project we build two models: a basic CNN model, which is less accurate, and a complex CNN model built on the VGG16 architecture, which is far more accurate than the simple CNN model. Both models classify whether an MRI scan image shows a tumour or not.
• Dataset Used:- In this project we have used the Brain Tumor Detection dataset provided by Kaggle. The dataset is comprised of MRI scan images provided as a subset of photos from a much larger dataset of 3 million manually annotated photos.
[Figure: MRI scan images used in the dataset]
Code Snippet-1) Simple CNN Model
Step 1 - Importing the Python Libraries
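The original snippet is a screenshot, so here is a plausible reconstruction of the imports the later steps rely on; the exact set used in the project may differ (in particular, Pillow as the image loader is an assumption).

```python
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from PIL import Image                      # assumed image loader
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical
```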
Step 2 - One-Hot Encoding the Target Classes
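A sketch of what the one-hot step could look like; using Keras's to_categorical is an assumption, and the label convention (0 = normal, 1 = tumour) is illustrative.

```python
# 0 -> [1, 0] (normal), 1 -> [0, 1] (tumour)
labels = [0, 1, 1, 0]
encoded = to_categorical(labels, num_classes=2)
print(encoded)
```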
Step 3 - Creating three lists: 1) a list storing the image data in NumPy array form, 2) a list storing the paths of all the images, and 3) a result list storing the one-hot encoded target (normal or tumour)
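A hedged reconstruction of the three lists, continuing from the Step 1 imports; the 'no'/'yes' folder names follow the usual layout of the Kaggle brain-tumour dataset and are an assumption.

```python
data = []    # image data as NumPy arrays (filled in Step 4)
paths = []   # file path of every image
result = []  # one-hot target: [1, 0] = normal, [0, 1] = tumour

for label, folder in enumerate(['no', 'yes']):   # assumed folder names
    for name in os.listdir(folder):
        paths.append(os.path.join(folder, name))
        result.append(to_categorical(label, num_classes=2))
```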
Step 4 - Resizing All the Images to the Standard Size of (128, 128) with 3 Colour Channels
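Continuing from the lists above; Pillow is the assumed loader, and scaling pixel values to [0, 1] is an extra assumption on my part.

```python
for p in paths:
    img = Image.open(p).convert('RGB').resize((128, 128))
    data.append(np.array(img))

data = np.array(data) / 255.0   # shape (n_images, 128, 128, 3), scaled to [0, 1]
result = np.array(result)       # shape (n_images, 2)
```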
Step 5 - Splitting the Dataset into Training and Test Datasets
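A plausible split; the 80/20 ratio and the fixed random_state are assumptions.

```python
x_train, x_test, y_train, y_test = train_test_split(
    data, result, test_size=0.2, random_state=0)
```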
Step 6 - Building the Model with Convolutional, Pooling and Flattening Layers
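A minimal CNN of the kind this step describes; the filter counts and dense-layer sizes are illustrative guesses, not the project's exact architecture.

```python
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dropout(0.3),
    Dense(2, activation='softmax')  # [normal, tumour]
])
```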
Step 7 - Compiling the Model and Printing the Model Summary
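A compilation sketch consistent with one-hot targets; the choice of the Adam optimizer is an assumption.

```python
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```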
Step 8 - Training the Model
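A training sketch; the epoch count and batch size are assumptions.

```python
history = model.fit(x_train, y_train,
                    epochs=30, batch_size=32,
                    validation_data=(x_test, y_test))
```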
Step 9 - Evaluating the Model and Printing the Confusion Matrix
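One way this step might look, using the scikit-learn confusion matrix and a Seaborn heatmap (both libraries are listed earlier in the deck):

```python
y_pred = np.argmax(model.predict(x_test), axis=1)
y_true = np.argmax(y_test, axis=1)
cm = confusion_matrix(y_true, y_pred)
sns.heatmap(cm, annot=True, fmt='d')
plt.show()
```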
Step 10 - Calculating the Final Accuracy of Our Model
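Evaluating on the held-out test set:

```python
loss, accuracy = model.evaluate(x_test, y_test)
print(f'Test accuracy: {accuracy:.3f}')
```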
Step 11 - Plotting the Losses of the Model on the Dataset
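Plotting training against validation loss from the history object returned in Step 8:

```python
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
```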
Step 12 - Testing Our Model and Making Predictions on a Few Images from Our Dataset
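A single-image prediction sketch; the index 0 is an arbitrary choice.

```python
img = x_test[0][np.newaxis, ...]   # add a batch dimension
probs = model.predict(img)[0]
print('tumour' if np.argmax(probs) == 1 else 'no tumour', probs)
```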
As seen in the final step, we are able to correctly detect and predict whether our MRI scan images have a tumour or not. But since the accuracy of this model is still low, we go on to build our second CNN model, which is built using the VGG16 architecture and has a higher accuracy than the previous model, namely 92.3%. The code of the second model is included in the report.
Final Sequence Diagram
[Figure: normal image and tumour image from the final sequence diagram]