Wednesday, August 26, 2020

TensorFlow Object Detection Training Using a Custom Dataset

In this blog we will implement TensorFlow object detection training using a custom dataset.



Now let's start the implementation directly, without delay. If you want to study the theory part first, click here.

I have used the kangaroo dataset, but you can also try your own images. After that we need to set up an annotation tool for dataset preparation.

There are many annotation tools available, but in this blog we will use the BBox Label Tool. You need to clone my GitHub project for BBox Label Tool dataset preparation.

cmd :- git clone https://github.com/Manishsinghrajput98/dataset_pre.git

cmd :- cd bbox_label_tool

You don't need any specific package for this, just Tkinter.

If you face any problem installing Tkinter, please use sudo with the install command.
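For example, on Ubuntu/Debian the Tkinter package for Python 3 can usually be installed like this:

cmd :- sudo apt-get install python3-tk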

Before running main.py you need to paste your images folder inside the Images folder. One more point: you have to follow this file structure, otherwise it will show an error. The name of your images folder indicates the name of your category, so please change it accordingly.

I have already set up the file structure according to our annotation tool. You will see a Labels folder; it stores our annotation values, one file per image in the images folder (like image1.txt).

The folder structure looks like this:
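A rough sketch (the kangaroo folder is just an example category; use your own category name):

bbox_label_tool/
    main.py
    Images/
        kangaroo/     <- your .jpg images (image1.jpg, image2.jpg, ...)
    Labels/
        kangaroo/     <- annotation .txt files written by the tool (image1.txt, ...)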



After setting up all the files, we need to run this script.

cmd :- python main.py

After this you need to draw bounding boxes around the objects in each image.

After finishing this step we need to convert our label files, which are .txt files matching our images, into XML format.

I have included all the scripts in my GitHub repository. Run this script to convert the dataset's .txt files into .xml format.

cmd :- python create_xmls_files.py

This script generates XML files, which are stored in the xmls folder, and also a trivial.txt file, which contains the names of our images without the extension.
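If you ever need to rebuild trivial.txt by hand, a minimal sketch looks like this (this is not the repository's actual script; the image folder path is an assumption):

Source code : -

import os

# list image basenames (without extension) into trivial.txt
image_dir = "Images/kangaroo"   # assumed path to your category folder
with open("trivial.txt", "w") as f:
    for name in sorted(os.listdir(image_dir)):
        base, ext = os.path.splitext(name)
        if ext.lower() in (".jpg", ".jpeg", ".png"):
            f.write(base + "\n")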

For better understanding, you can watch a short video (YouTube).


After this step you need to copy your images folder, xmls folder, and trivial.txt file into the annotations folder. I have already created this structure; you just need to replace the contents of the annotations folder with yours. Please make sure you follow the annotations directory layout. One more point: the labels folder inside the annotations folder contains the label_map.pbtxt file. We need to change it according to your label classes.
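For a single class such as kangaroo, for example, label_map.pbtxt looks like this (the id and name are illustrative; add one item block per class):

item {
  id: 1
  name: 'kangaroo'
}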

The file structure looks like this:
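A sketch based on the description above (check the repository's annotations folder for the exact names):

annotations/
    images/            <- your .jpg images
    xmls/              <- the generated .xml files
    trivial.txt
    labels/
        label_map.pbtxt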



After these steps we are ready to train our model on the custom dataset using Google Colab. If you want to train your model on a local machine, you can do that too; just follow these steps.

Now let's come to the training. Just clone my repository:

cmd :- git clone https://github.com/Manishsinghrajput98/tensorflow_object_detect.git

cmd :- cd tensorflow_object_detect

This training project contains all the important files. Just replace the annotations folder in the training project with your own annotations folder.

After these steps we are ready to train. Just upload your training folder to Google Drive for use in Colab.
If you want to train on your local machine, you need to create a virtual environment under Python 3
and install the necessary packages. I have included a requirements.txt file in the training folder.

You can create a virtual environment like this:

cmd :- virtualenv --python=python3.6 myvenv

After this you have to activate the virtual environment and install the requirements:

cmd :- source myvenv/bin/activate

cmd :- pip install -r requirements.txt 

That is the process for a local machine.

If you want to train your model on Google Colab, you need to upload your complete training folder to Google Drive.

After this you need to open Google Colab. Just search for Google Colab and click on the first link, but remember to log in with the same email id whose Google Drive holds the uploaded training folder.
After this step you need to mount your drive and set up the GPU. Google Colab already provides a GPU; you just need to open the Runtime menu, click Change runtime type, and select the GPU option.
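Mounting Google Drive in Colab is done with the standard two-line snippet:

from google.colab import drive
drive.mount('/content/drive')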

After this step you need to copy your training folder into Google Colab. Just create a tensorflow_training folder.

Click here: Colab Notebook

cmd :- !mkdir tensorflow_training

Then run this Python script to copy all the content into tensorflow_training.

Source code : - 

from distutils.dir_util import copy_tree

# copy the uploaded training folder from Google Drive into the Colab workspace
copy_tree("/content/drive/My Drive/tensorflow_object_detection", "/content/tensorflow_training/")

After this we need to install these packages. Just copy and paste them into Google Colab:

%tensorflow_version 2.x
!pip uninstall -y tensorflow
!pip install tensorflow-gpu==1.14.0
!pip install Keras==2.2.4
!pip install mrcnn

After running these commands we need to create TFRecords for training.

cmd :- !python object_detection/create_tf_record.py

After running this script you will get train.record and val.record in the tensorflow_training folder.
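If you want to sanity-check the generated records, you can count the examples like this (a small sketch, assuming TensorFlow 1.x and that the record files sit in the current working directory):

import tensorflow as tf

# count the examples written into the record file (TF 1.x API)
count = sum(1 for _ in tf.python_io.tf_record_iterator("train.record"))
print("train.record contains", count, "examples")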

After this step we need to run the following script to train our model.

Before that we need to modify the ssd.config file according to our label names and the number of training steps:

line number 9 for the number of classes (change it according to your labels)

line number 164 for the number of steps (for one class the default is 10000)
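For reference, the relevant fragments of a typical SSD pipeline config look roughly like this (the exact line numbers depend on your copy of ssd.config):

model {
  ssd {
    num_classes: 1        # set to the number of classes in your label_map.pbtxt
    ...
  }
}

train_config: {
  ...
  num_steps: 10000        # raise this for more classes or longer training
}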

then run

cmd :- !python object_detection/train.py \
        --logtostderr \
        --train_dir=train \
        --pipeline_config_path=ssd.config

After running this script, it takes some time to train the model. Then we need to export the model in frozen_inference_graph.pb format; just run the following script to export the model in .pb format (make sure the --trained_checkpoint_prefix step number matches the last checkpoint written in your train folder).

cmd :-  !python3 object_detection/export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path train/pipeline.config \
        --trained_checkpoint_prefix train/model.ckpt-100000 \
        --output_directory output_inference_graph

After running this script you will get the frozen_inference_graph.pb file in the output_inference_graph folder. Before running the next script, we need this frozen_inference_graph.pb model file and the label_map.pbtxt file (which is stored in our annotations folder).

cmd :- !python model_test.py --i "/content/drive/My Drive/tensorflow_object_detection/test_image.jpg"

--i :- path of the input image.

After running this script you will get the output image in the project folder.
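If you are curious what loading the exported frozen graph looks like, here is a minimal sketch (not the repository's actual model_test.py; it assumes TensorFlow 1.14 and the standard tensor names that export_inference_graph.py produces):

import numpy as np
import tensorflow as tf
from PIL import Image

PATH_TO_GRAPH = "output_inference_graph/frozen_inference_graph.pb"

# load the frozen graph
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

# run detection on a single test image
with tf.Session(graph=graph) as sess:
    image = np.expand_dims(np.array(Image.open("test_image.jpg")), 0)
    boxes, scores, classes = sess.run(
        [graph.get_tensor_by_name("detection_boxes:0"),
         graph.get_tensor_by_name("detection_scores:0"),
         graph.get_tensor_by_name("detection_classes:0")],
        feed_dict={graph.get_tensor_by_name("image_tensor:0"): image})
    print(boxes[0][:3], scores[0][:3], classes[0][:3])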

Like this:



Thanks for reading.

If you have any doubts, please comment.
