Sunday, January 31, 2021

Similar Image Search using Histograms and Python

Hi, in this blog we are going to implement a basic image similarity search project using some basic Python libraries.

This project is very useful for beginners.

In this project I am going to use histogram matching and Python OpenCV.

First, you need to clone my project:

cmd :- git clone https://github.com/Manishsinghrajput98/similar_search.git

cmd :- cd simmilar_search_histogram

cmd :- python main.py --input input.jpg

where --input is the command-line argument for taking the input image from the user.

We don't use any model training for this project.

This is our input image


Our project folder contains a Database folder, which holds images with various backgrounds.

You need to pass an input image. The input image is compared against all the images in the Dataset folder using histograms; every image whose match score is greater than 65 percent is returned as similar. The similar images are also saved in the output folder, so you can check them there.
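For reference, the core of this comparison can be sketched as below (a minimal sketch; the exact histogram parameters in main.py may differ):

Source code :-

import cv2
import glob
import os

def hist_similarity(img1, img2):
    # Build normalized HSV color histograms and compare them with
    # correlation, which returns 1.0 for identical histograms
    h1 = cv2.calcHist([cv2.cvtColor(img1, cv2.COLOR_BGR2HSV)], [0, 1], None, [50, 60], [0, 180, 0, 256])
    h2 = cv2.calcHist([cv2.cvtColor(img2, cv2.COLOR_BGR2HSV)], [0, 1], None, [50, 60], [0, 180, 0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

query = cv2.imread("input.jpg")
for path in glob.glob("Dataset/*.jpg"):
    candidate = cv2.imread(path)
    if hist_similarity(query, candidate) > 0.65:  # the 65 percent threshold
        cv2.imwrite(os.path.join("output", os.path.basename(path)), candidate)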


These are our output images:






There are many ways to solve similar-image search; I have used histograms. In an upcoming blog we will implement a powerful product recommendation system using a deep learning algorithm for feature extraction, Euclidean distance for comparing the feature vectors, and Mask R-CNN for product detection. We will prepare that project for deployment, and I will also try to build a front end for it, so you can check the back-end results in the front end.

If you have any doubts, please comment.

Thanks 




Face Recognition using Histograms and Python

Hi, in this blog we are implementing face detection with face identification using histograms and Python.


The first is the input image and the second is our output image, which is matched against our database to return the name of the person in the image. For the label I have used a simple string split function.

You can check the script.

Note :- You need to save the database images with their labels as file names. You can check our Database folder.

Histograms are commonly used in machine learning projects. We can't use this implementation in production face recognition projects; I have tried using histograms, and they provide sufficient accuracy, but not excellent accuracy.

You can use it for learning purposes: how to use histograms in Python, and how to use the Haar cascade model for face detection. Haar cascades do not provide great accuracy; you can use MTCNN or YOLO-Face instead.

In an upcoming blog we will implement a face recognition project with good accuracy.

Now you can try this project. It also provides reasonable accuracy for beginners learning machine learning.

In this project we are not using any third-party API for face identification; we are using histograms.

We are using OpenCV and some basic Python libraries to implement this project; no other libraries are needed.

I am providing the code, some testing images, and the Haar cascade model for face detection, which is used to return cropped faces from our input image and the Database images.

You need to clone this project and install opencv-python:

cmd :- git clone https://github.com/Manishsinghrajput98/Face_recognition_histogram.git

after that

cmd :- cd Face_Histogram

after that

cmd :- python Face.py --input input.jpg 

where --input is the command-line argument for taking the input image from the user.

We don't use any model training for this project.

The Dataset folder contains the images of each person. I have used 5 people in this project; you can use more. I have already tried it with 1000 classes, and it gave about 70 percent accuracy.

But for this blog I have used only 5 classes.

Don't worry: if you look at the structure of the project, you will definitely understand what is happening here.

After running the Face.py script, you will see the output displayed.

You need to pass the input image in which you want to identify the person. I have also added a print statement, so you can check the label name in your terminal.

The input image is first passed to our face-detect method, which returns a cropped face image using the Haar cascade OpenCV model. This step is applied to both the input image and the images in the Database folder.

After this, the cropped input image and the cropped Database images are passed to the histogram calculation. This method returns histogram values for both the input image and the Database folder.

After this, the input image values and the Database image values are passed to the histogram matching method, which returns the match values.

After this, the match values are checked.

In this process the input image is matched against the Dataset images one by one using histograms, returning a value for each.

If the value is greater than 75, the person is identified; if it is less than 75, no match is reported.
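Putting the whole pipeline together, it can be sketched as below (a minimal sketch; file names here are examples, the real logic lives in Face.py):

Source code :-

import cv2
import glob
import os

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

def crop_face(image):
    # Detect the first face with the Haar cascade and return the cropped region
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]

def face_hist(face):
    # Grayscale intensity histogram, normalized so image sizes don't matter
    hist = cv2.calcHist([face], [0], None, [256], [0, 256])
    return cv2.normalize(hist, hist)

query = crop_face(cv2.imread("input.jpg"))
for path in glob.glob("Database/*.jpg"):
    db_face = crop_face(cv2.imread(path))
    if query is None or db_face is None:
        continue
    match = cv2.compareHist(face_hist(query), face_hist(db_face), cv2.HISTCMP_CORREL) * 100
    if match > 75:  # the 75 threshold described above
        print(os.path.basename(path).split(".")[0])  # the label comes from the file name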

That is the end of this project. In an upcoming blog we will implement the next project.

File structure and result 



If you have any doubts, please comment.

Thanks 







Friday, August 28, 2020

How to train a YOLOv3 model on a custom dataset

In this blog we will implement training a YOLOv3 model on a custom dataset.


In this blog I have used kangaroo images for demo purposes; you can use your own images. But before this you need to prepare a dataset for model training.

Don't worry, I have already covered this in my previous blog. You can follow that blog, and you can also watch the video: click here.

In this blog I have used Google Colab. You can use your own system if you have good hardware resources.

Now, coming to the point, you need to clone my project:

cmd :- git clone https://github.com/Manishsinghrajput98/yolo_traning.git

cmd :- cd yolo_traning

After finishing this step, you have to copy/paste your data_darknet folder into the cloned project folder (dataset preparation is already covered in my previous blog; you need to follow that blog).

If you are using Google Colab, you need to upload this complete folder to Google Drive.

For Google Colab

First you need to open Google Colab, log in or sign up, mount the drive to copy our project folder into Colab, and enable the GPU option.

After this step we need to install darknet. 

cmd :- git clone https://github.com/pjreddie/darknet.git

cmd :- cd darknet

cmd :- !make (the ! operator is only for Google Colab; on a local machine you don't need to add it)

cmd :- !mkdir yolo_training

Official website URL

Note :- You need to follow these steps whether you are using a local machine or Google Colab.

Note :- You need to read this and follow it accordingly: click here

After this, you need to run this script to copy your Drive project into Colab (this step is not for a local machine).

Source code

from distutils.dir_util import copy_tree

copy_tree("/content/drive/My Drive/yolo_traning", "/content/yolo_training/")

After these steps you need to split your dataset into train and test files (train.txt, test.txt):

cmd :- cd yolo_training

cmd :- !python process.py data_darknet
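The idea behind such a split script is simple. A minimal sketch, assuming the labelled images sit directly in the folder passed on the command line (the actual process.py may differ):

Source code :-

import glob
import os
import random
import sys

# Collect all labelled images and write 90/10 train/test lists,
# which Darknet reads as plain lists of image paths
images = glob.glob(os.path.join(sys.argv[1], "*.jpg"))
random.shuffle(images)
split = int(0.9 * len(images))

with open("train.txt", "w") as f:
    f.write("\n".join(images[:split]))
with open("test.txt", "w") as f:
    f.write("\n".join(images[split:]))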

After this step we are ready to train our model on the custom dataset. Before this you need to download the pre-trained YOLO model. I have already mentioned the path of this model in the pretrained_model.txt file; just download the model file.

My Colab Notebook (you can use this)

Open the pretrained_model.txt file and copy the path of the pre-trained model:

cmd :- !wget https://pjreddie.com/media/files/darknet53.conv.74

After finishing these steps we are ready to train our model on the custom dataset; just run one more command for training.

If you are using a local machine for training, you need to replace the paths accordingly.

cmd :- !./darknet/darknet detector train /content/darknet/yolo_training/task.data /content/darknet/yolo_training/task.cfg /content/darknet/yolo_training/darknet53.conv.74 -gpus 0

like 

It takes time to train our model. 

Now let's test our model on random images:

cmd :- ./darknet detector test /content/darknet/yolo_training/task.cfg /content/darknet/yolo_training/backup/yolo_training_30000.weights /content/darknet/yolo_training/test_images/1.jpg -thresh 0.1

You need to change the paths accordingly and also insert the path of your image (local machine or Google Colab).

If you want a Python script for this, visit my previous blog; there I have provided a Flask API so you can easily integrate it into your project: click here

If you have any doubts, please comment.

Thanks

Wednesday, August 26, 2020

Tensorflow Object Detection Training using Custom Dataset

In this blog we will implement TensorFlow object detection training using a custom dataset.



Now we will start the implementation directly, without delay. If you want to study the theory part, click here.

I have used a kangaroo dataset; you can also try your own images. After that we need to set up the annotation tool for dataset preparation.

There are many annotation tools available, but in this blog we will use the BBox Label Tool. You need to clone my GitHub project for BBox Label Tool dataset preparation:

cmd :- git clone https://github.com/Manishsinghrajput98/dataset_pre.git

cmd :- cd bbox_label_tool

You don't need any specific package for this, just Tkinter.

If you face any problem installing Tkinter, please use sudo with the install command.

Before running main.py you need to paste your images folder into the Images folder. One more point: you have to follow this file structure, otherwise it will show an error. Your images folder name indicates the name of your category, so please name it accordingly.

I have already set up the file structure according to our annotation tool. You will see a Labels folder; it stores our annotation values, one file per image in the images folder (like image1.txt).

The folder structure looks like this:



After setting up all the files, we need to run this script:

cmd :- python main.py

After this you need to create bounding boxes.

After finishing this step, we need to convert our label files (the .txt files for our images) into XML format.

I have included all the scripts in my GitHub repository. Run this script to convert the dataset .txt files into .xml format:

cmd :- python create_xmls_files.py

This script generates XML files, which are stored in the xmls folder, and a trivial.txt file, which contains the names of our images without extensions.

For better understanding you can watch the short video (YouTube).


After these steps you need to copy your images folder, xmls folder, and trivial.txt file into the annotations folder. I have already created this structure; you just need to replace the contents of the annotations folder. Please make sure you follow the annotations directory layout. One more point: the labels folder inside the annotations folder contains a label_map.pbtxt file; we need to change it according to your label classes.
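For a single class such as kangaroo, label_map.pbtxt follows the standard TensorFlow Object Detection API format (IDs start at 1):

item {
  id: 1
  name: 'kangaroo'
}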

The file structure looks like this:



After these steps we are ready to train our model on the custom dataset using Google Colab. If you want to train your model on a local machine, you can do that too; just follow these steps.

Now, coming to the training, just clone my repository:

cmd :- git clone https://github.com/Manishsinghrajput98/tensorflow_object_detect.git

cmd :- cd tensorflow_object_detect

This training project contains all the important files; just replace the annotations folder in the training project with your own annotations folder.

After these steps we are ready to train; just upload your training folder to Google Drive for Colab use. If you want to train on your local machine, you need to create a virtual environment under Python 3 and install the necessary packages. I have included a requirements.txt file in our training folder.

You can create a virtual environment for this:

cmd :- virtualenv --python=python3.6 myvenv

After this you have to activate the virtual environment:

cmd :- source myvenv/bin/activate

cmd :- pip install -r requirements.txt 

This process is for a local machine.

If you want to train your model on Google Colab, you need to upload your complete training folder to Google Drive.

After this you need to open Google Colab: just search for Google Colab and click on the first link. But remember, you need to log in with your email ID to use the uploaded training folder on Google Colab.
After this you need to mount the drive and set up the GPU. Colab already provides a GPU; you just need to select the Runtime menu, click Change runtime type, and select the GPU option.

After these steps you need to copy your training folder into Google Colab. Just create a tensorflow training folder:

click here Colab Notebook

cmd : - !mkdir tensorflow_training 

Then run this Python script to copy all the content into tensorflow_training.

Source code : - 

from distutils.dir_util import copy_tree

copy_tree("/content/drive/My Drive/tensorflow_object_detection", "/content/tensorflow_training/")

After this we need to install these packages; just copy and paste them into Google Colab:

%tensorflow_version 2.x
!pip uninstall -y tensorflow
!pip install tensorflow-gpu==1.14.0
!pip install Keras==2.2.4
!pip install mrcnn

After running these, we need to create TFRecords for training:

cmd :- !python object_detection/create_tf_record.py

After running this script you will get train.record and val.record in the tensorflow training folder.

After these steps we need to run the training script.

Before this we need to modify the ssd.config file according to our label names and the number of training steps; the snippet after these notes shows how those settings look.

Line 9: number of classes (change it according to your labels)

Line 164: number of steps (for one class the default is 10000)
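Inside ssd.config those two settings look roughly like this (exact line numbers vary between config versions):

num_classes: 1        # around line 9: one class (kangaroo)
num_steps: 10000      # around line 164: training steps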

Then run:

cmd :- !python object_detection/train.py \
        --logtostderr \
        --train_dir=train \
        --pipeline_config_path=ssd.config

After running this script, it takes time to train the model. Then we need to export the model in frozen_inference_graph.pb format; just run this script to export the model as a .pb file:

cmd :-  !python3 object_detection/export_inference_graph.py \
        --input_type image_tensor \
        --pipeline_config_path train/pipeline.config \
        --trained_checkpoint_prefix train/model.ckpt-100000 \
        --output_directory output_inference_graph

After running this script you will get the frozen_inference_graph.pb file in the output_inference_graph folder. Before running the test script below, we need the frozen_inference_graph.pb model file and the label_map.pbtxt file (which is stored in our annotations folder).

cmd :- !python model_test.py --i /content/drive/My Drive/tensorflow_object_detection/test_image.jpg

--i :- path of the input image.

After running this script you will get the output image in the project folder,

like



Thanks for reading.

If you have any doubts, please comment.

Monday, August 24, 2020

How to train Mask RCNN model for custom dataset using google colab

In this blog we will implement a Mask R-CNN model for a custom dataset. Mask R-CNN is an instance segmentation model. First we need a dataset; the dataset is the most important part of artificial intelligence. Mask R-CNN returns the class name, bounding box coordinates, and object mask values for each object.


Now, coming to the point: I have already covered instance segmentation data preparation in a previous blog. First you need to learn how to annotate image data for instance segmentation; that blog contains all the details of instance segmentation data preparation. You can also use my sample kangaroo image dataset, which is available in that blog. In that blog I used the labelme tool. Just invest 30 minutes in it and learn how to prepare a dataset for Mask R-CNN: Click here.

After finishing the dataset preparation steps, you need to download my project folder from Google Drive. I have included all the important folders, Python files, etc. in the project folder, including the pre-trained mask_rcnn_coco.h5 model. After downloading, you need to copy/paste your dataset folder into the downloaded project folder. After finishing these steps we are ready to train the Mask R-CNN model on the custom dataset.

You must download this folder.

We will use Google Colab for model training; you can also try it on your local machine, just replace the paths of the files and folders. I have mentioned all the details in the Python script.

After this step you need to upload your project to your Google Drive.

Then search for Google Colab, mount the drive, and set up the GPU. It's not rocket science: just click the Mount Drive button and sign in, and for the GPU click Runtime, then Change runtime type, then select GPU.

After this you need to install the packages given below on Google Colab for GPU training. Just copy all the packages and paste them into Colab. If you use your own system, please install these packages inside a Python 3 virtual environment.

%tensorflow_version 2.x

!pip uninstall -y tensorflow

!pip install tensorflow-gpu==1.14.0

!pip install Keras==2.2.4

!pip install mrcnn

After these steps you need to copy your project into Google Colab. Just create a mask_rcnn_train folder and run this script:

cmd :- !mkdir mask_rcnn_train

Source code :

from distutils.dir_util import copy_tree

copy_tree("/content/drive/My Drive/mask_rcnn_custom_train", "/content/mask_rcnn_train/")

Now your uploaded Drive project folder is copied into your Google Colab mask_rcnn_train folder.

After this run

cmd :- !pwd

You will see this type of result:

 /content/

So you need to run this command (in Colab use %cd rather than !cd, because !cd runs in a subshell and does not change the working directory):

cmd :- %cd mask_rcnn_train

After this we are ready to train our model on the custom dataset.

One more point: if you train this model on your local system, you need to change the path of the logs folder. This folder will store the trained model, so please make sure you change it.

You can also change the epochs, and you need to change the class name and the number of classes. In this blog we are using one class, named kangaroo; you can insert your class names according to your dataset. A sketch of how these settings look is shown after the list below.

Line 58: logs folder location (this folder will store the trained model)

Line 66: class name

Line 221: epochs
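As a sketch, in a Matterport-style train.py those settings usually live in a Config subclass and the train() call, something like this (names here are illustrative, not the exact contents of my script):

Source code :-

import mrcnn.model as modellib
from mrcnn.config import Config

class KangarooConfig(Config):
    NAME = "kangaroo"          # the class name setting
    NUM_CLASSES = 1 + 1        # background + kangaroo
    STEPS_PER_EPOCH = 100

config = KangarooConfig()
# model_dir is the logs folder where the trained weights are stored
model = modellib.MaskRCNN(mode="training", config=config, model_dir="logs")
# the epochs argument is the epochs setting mentioned above:
# model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=30, layers="heads")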

You can also follow my Google Colab notebook; I have included all the code and the necessary commands there. Modify the paths and follow these steps for training.

Click here Colab Notebook

Run this on Colab (or locally; if you try it on a local machine, please remove the ! operator):

cmd :- !python3 train.py train --dataset=/content/mask_rcnn_train/dataset --weights=coco


After running this script it takes some time. Model files will appear in the logs folder; we will use the last trained model file.

I have already included the image test script in the project folder; you can use it, but you need to change these lines according to your class name and model path:

Line 23: number of classes

Line 38: path of the newly trained model (the .h5 file in the logs folder)

Line 37: model directory folder (you can use any path for this)

Line 40: class name

Then run this script 

cmd :- !python test_images.py --i /content/mask_rcnn_train/input_test_images/00054.jpg --o /content/mask_rcnn_train

I have already included the input_test_images folder in the project folder. You can paste your images into the input_test_images folder, then replace the path of your image. If you use a local system, please change the path accordingly.

--i :- path of the input image

--o :- path of the output image

After running this script you will get the detected images in your output folder.

FIG (A Result)

FIG (B Result)

You will get this type of result. I know these images do not show a generated mask; that is because I used only 50 images for demo purposes. If you use more data and increase the epochs, it will definitely show the mask and a good result.

After this we will test our model on video. Don't worry, I have already included the script in the folder; you can use it. I have also included a kangaroo .mp4 file. You can test your model on videos for your own class; just replace the class name and number of classes:

Line 29: number of classes

Line 34: path of the newly trained model (the .h5 file in the logs folder)

Line 33: model directory folder (you can use any path for this)

Line 36: class name

Then run this script 

cmd :- !python test_videos.py --i /content/mask_rcnn_train/videos_test/kangaroo_videos_test.mp4

--i :- path of the input video

After running this script you will get the output video, output.avi.

Short video (YouTube)


If you need a Flask API for this, please visit my blog How to create a Flask Machine Learning API for a Mask RCNN Detection model.

If you have any doubts, please comment.

Thanks 

Sunday, August 23, 2020

How to annotate datasets for instance segmentation and semantic segmentation

In this blog we will learn how to annotate datasets for instance segmentation and semantic segmentation.

First we need to understand what semantic segmentation and instance segmentation are.


This image helps you understand: in semantic segmentation, every pixel in the image belongs to one particular class.

In instance segmentation, different instances of the same class are segmented individually.

Now, coming to the point: Mask R-CNN is an instance segmentation model,

and DeepLab is a semantic segmentation model.

After annotation you can easily train your own Mask R-CNN and DeepLab models. Dataset annotation is a very important part of machine learning.

In this blog we will use the labelme tool to annotate our dataset. You need to install this tool; don't worry, it's not rocket science.

I recommend you use a virtual environment for this to avoid installation errors.

Create a virtual environment using Python 3.6:

cmd :- virtualenv --python=python3.6 annotations

After this, activate the virtualenv:

cmd :- source annotations/bin/activate

After this, install the labelme tool:

cmd :- pip install labelme

We need to install one more package:

cmd :- pip install pyqt5

The installation is done; now we are ready to prepare your dataset for model training.

cmd :- labelme 

After running this command, the display opens like this:


After this, click the Open Dir button to select your images folder for annotation. Now we are ready for dataset preparation, one image at a time. Click the Create Polygons button to draw polygons over the objects in each image. After creating the polygons, insert the name of your label class, then click the Save button. You can select the path where the annotation files are stored; I recommend storing the JSON annotations in your images folder, next to the images they belong to.

 
This process is the same for both instance segmentation and semantic segmentation.

After annotation we need to convert the annotated dataset into the formats needed for instance segmentation and semantic segmentation.

Note :- Friends, dataset preparation is the most important part of an AI model, so please make sure your polygons are correct; otherwise the bad polygons will be reflected in your model.

Optional step :- We also need to remove negative images (which do not contain any objects) from your images folder. I have already included a script for this in my GitHub repository; you can use it to remove negative images from the dataset folder. Just replace the path with your folder.

Script :- delete_images_json_not.py (you need to modify this script according to your requirements)
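The essence of that script is just a few lines (a sketch; the repository version may handle more cases):

Source code :-

import glob
import os

# Delete images that have no matching labelme .json annotation (negative images)
for img in glob.glob("kangaroo_images/*.jpg"):
    if not os.path.exists(os.path.splitext(img)[0] + ".json"):
        os.remove(img)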

(1) Instance segmentation For MASK RCNN custom training

First we will convert our dataset for instance segmentation, for Mask R-CNN custom training.

We need a Python script for this; the labelme GitHub repository provides it. You can read about it there.

We will follow simple steps. Just clone my GitHub repository.

Before this we need to install pycocotools:

cmd :-  pip install pycocotools

cmd :- git clone https://github.com/Manishsinghrajput98/dataset_pre.git

cmd :- cd dataset_pre

cmd :- cd instance_segmentation_mask_rcnn

Note :- I have included a use.txt file in the folder; you can read it to learn how to use this script. Don't worry, it's not rocket science: you just replace the path of your annotated dataset folder (command-line arguments).

You have to change the label names in your labels.txt, as shown below. Please don't remove the __ignore__ and _background_ labels, otherwise your model will not train correctly.
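For the kangaroo dataset, labels.txt would look like this (the first two entries are the ones required by the labelme conversion scripts):

__ignore__
_background_
kangaroo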

One more point: you need to pass command-line arguments. We need 3 paths: 1 is the annotated dataset path, 2 is the labels.txt file, and 3 is the output folder.

You need to run this script inside the virtual environment created earlier:

cmd :- python labelme2coco.py /home/rajput/Desktop/dataset_pre/instance_segmentation_mask_rcnn/kangaroo_images/ /home/rajput/Desktop/dataset_pre/instance_segmentation_mask_rcnn/kangaroo_train --labels /home/rajput/Desktop/dataset_pre/instance_segmentation_mask_rcnn/labels.txt

For Mask R-CNN training we need to run the Python script twice: once for the train folder and once for the validation folder. You can use 10 percent of the data for validation and the rest for training.

After finishing this process we need to create an Annotations folder. Inside it we create train and val folders, and inside each of those an images folder. Then just copy/paste the output images into the images folders and the (COCO) JSON files alongside them.

After this, our Annotations folder is ready for Mask R-CNN custom dataset training.

Folder structure like

Annotations :-
(1) train :-
    (I) images
    (II) annotations.json
(2) val :-
    (I) images
    (II) annotations.json

Now let's move on to semantic segmentation for DeepLab custom training.


(2) Semantic segmentation for DeepLab custom training

labelme also provides a Python script to convert annotations for semantic segmentation. My folder contains this script; you just need to run it inside the virtual environment.

One more point: you need to pass command-line arguments. We need 3 paths: 1 is the annotated dataset path, 2 is the labels.txt file, and 3 is the output folder.

You also have to change the label names in your labels.txt. Please don't remove the __ignore__ and _background_ labels, otherwise your model will not train correctly.

cmd :- cd semantic_segmentation_deeplabs

Note :- I have included a use.txt file in the folder; you can read it to learn how to use this script. Don't worry, it's not rocket science: you just replace the path of your annotated dataset folder (command-line arguments).

cmd :- python labelme2voc.py /home/rajput/Desktop/dataset_pre/semantic_segmentation_deeplabs/kangaroo_images/ /home/rajput/Desktop/dataset_pre/semantic_segmentation_deeplabs/kangaroo_train --labels /home/rajput/Desktop/dataset_pre/semantic_segmentation_deeplabs/labels.txt

The generated output folder contains multiple items, but for DeepLab model training we need three of them: 1 is the class_name.txt file, 2 is the JPEGImages folder, and 3 is the SegmentationClassPNG folder.

After this process you can copy/paste these folders into a separate folder, which will be needed in a future blog for DeepLab model training.

For the DeepLab model we don't need a dataset split for now. Later, when we create the TFRecords, we will need the dataset split; we will cover that process in a new blog (DeepLab Custom Dataset Training).

This is the complete process of dataset preparation for instance segmentation and semantic segmentation.

This is the short video of our process; you can watch it and follow our dataset preparation for Mask R-CNN and DeepLab model training (YouTube).



In the next blog I have covered all the details of Mask R-CNN custom dataset training. Please review that blog and learn how to train a Mask R-CNN model on a custom dataset; in that blog I have used Google Colab.
Click here: this URL.

Thanks for reading.

If you have any doubts, please comment.

Sunday, August 16, 2020

Touchless attendance: face recognition with a voice system for office, school, college, coaching, etc. attendance during the COVID-19 pandemic

In this blog we will cover a touchless attendance system: face recognition with a voice system for office, school, college, coaching, etc. attendance during the COVID-19 pandemic.

You know very well that this kind of system is very useful.

In the coming days, schools, colleges, coaching centers, private-sector offices, government offices, etc. will need this type of system for attendance.

In this system I have used Python, OpenCV, TensorFlow, gTTS, MongoDB, and Node.js.

Please wait a little while: I am working on the front end to display attendance in a web application, just 2-3 more days. I have finished my work on the back end.

This project is very useful nowadays; everyone wants touchless attendance with good accuracy.

This is a short video of the back end. In the next video I will show you the complete web application for touchless attendance.





Wednesday, July 29, 2020

How to extract mask values from an image and apply them on a white background using machine learning


In this blog we are implementing how to extract mask values from an image and apply them on a white background, using a pre-trained DeepLab machine learning model.



In this blog we are using the DeepLab pre-trained model to extract mask values from images.

DeepLab is one of the most promising techniques for semantic image segmentation.

If you want to learn more about DeepLab, you can Click here.

I will cover only the implementation part. You can easily use my code in your real-time project by just replacing the path of the model.

If you want to learn how to create annotations for DeepLab training, and how to train a DeepLab model on a custom dataset, I will cover that in my next blog with a detailed description.

In this blog I have also provided my Flask API script, so you can easily integrate semantic image segmentation into your project.

First you need to download the DeepLab pre-trained model: click here. After downloading, the folder contains 3 files, but we need only one, frozen_inference_graph.pb.

After this you have to clone my project from GitHub:

cmd :- git clone https://github.com/Manishsinghrajput98/extract_mask_from_images_using_ml.git

cmd :- cd extract_mask_from_images_using_ml

And once more, because my repository contains multiple projects, you need to enter the extract_mask_from_images_using_ml folder:

cmd :- cd extract_mask_from_images_using_ml

The project file and folder structure looks like this:



After this you have to copy/paste your downloaded model file, named frozen_inference_graph.pb, into the model folder of the cloned project.

Create a virtual environment for this. I have used Python 3.6:

cmd :- virtualenv --python=python3.6 myvenv

After this you need to activate the virtual environment:

cmd :- source myvenv/bin/activate

cmd :- pip install -r requirements.txt

I have already collected testing images in the input folder. If you want to try your own images, copy them into the input folder.

Now we are ready for script execution. The extract_mask_from_images_using_ml folder contains two Python scripts: one with a Flask API and one without. You should try both files.
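At their core, both scripts do something like the sketch below (tensor names follow the official DeepLab demo; the class index 15 is "person" in the PASCAL VOC label map, which is an assumption that depends on the checkpoint you downloaded):

Source code :-

import cv2
import numpy as np
import tensorflow as tf

# Load the frozen DeepLab graph (TensorFlow 1.x style)
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("model/frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

image = cv2.imread("input/Test1.jpg")
resized = cv2.resize(image, (513, 513))  # DeepLab's default input size

with tf.Session(graph=graph) as sess:
    seg_map = sess.run("SemanticPredictions:0",
                       feed_dict={"ImageTensor:0": [cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)]})[0]

# Keep only the pixels of the target class (assumed index 15) and paste them on white
mask = seg_map == 15
white = np.full_like(resized, 255)
white[mask] = resized[mask]
cv2.imwrite("output/result.jpg", white)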

If you want to test the Flask API script, you will need the Postman software. You can also design a front end for this and call your Flask API; it will return the result on the front end (HTML page).

If you want to learn how to use the Postman API tool (how to download it, log in or sign up, and set parameters), I will cover that in a future blog if needed.

Friends, in the AI industry almost everyone creates a model API to show their model results in a web application; without an API you cannot show your ML model results on the front end (HTML page). So you need to learn how to create an API for this. I have covered it in all my blog posts, because I provide a Flask API in every blog, so you can easily use these APIs in your real-time projects. And I am not an expert: I graduated in the 2019 batch with Information Technology and Engineering, and I am sharing my past 1 year of experience in AI.

You can also try my without_flask_api script for testing purposes.

So now let's test on random images.

You can also see my input image in the input folder.



After running the script:

cmd :- python without_flask_api.py --input input/Test1.jpg

Your terminal will look like this:



After this you will get the output image in the output folder,

like 


And try more images:

cmd :- python without_flask_api.py --input input/2.jpg

input image 


You will get the output image in your output folder.


You can also try your own images, but it will only return masks for the object types that are in the COCO dataset class list, because the DeepLab pre-trained model was trained on the COCO dataset.

Now let's come to the Flask API for this and implement it.

As mentioned, you need Postman or any other front-end page to call this API.

First we need to run the server file. You can also run this file on a server (an AWS machine, etc.) for deployment:

cmd :- python with_flask_api.py --port 8080 

(You can change the port; the default is 8080. You can also run it without the --port argument; as mentioned in my code, it will run on port 8080 by default.)
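The server side of with_flask_api.py can be sketched like this (extract_mask is a hypothetical helper standing in for the DeepLab code shown earlier):

Source code :-

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/start", methods=["POST"])
def start():
    # Postman sends {"image_path": "..."} as the JSON body
    image_path = request.json["image_path"]
    output_path = extract_mask(image_path)  # hypothetical helper: runs DeepLab and saves the result
    return jsonify({"status": "success", "output": output_path})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)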

After running this script, your terminal will look like this:



After this you need to open your Postman tool.

Postman takes the address of your API and the API parameters:

Address:- http://localhost:8080/start

Parameter :- 

{
"image_path":"/home/rajput/Desktop/extract_mask_from_images_using_ml/input/6.jpg"
}

You must change the path according to your image; after this you need to hit the API by clicking the Send button in Postman.



If you receive this type of message in Postman, your API call was successful.

Also you can check your terminal 

 
After this, open your output-flask-api folder; it stores the output image.



In this image, one side is the input image and the other side is the output image.

Similarly, you can try all the images in our input folder.


And the server terminal looks like this:



This is the complete Flask API for deploying your ML model on any machine, like Google Cloud, AWS, etc.

If you have any doubts, please comment.

Thanks 

Tuesday, July 28, 2020

How to train Inception-v3 image classification model on custom Dataset


In this blog we are implementing an Inception-v3 image classification model on a custom dataset.


We are using the Inception-v3 model, which has already been trained by Google on 1000 classes. But what if we want to do the same thing with our own images? We are going to use transfer learning, which lets us retrain just the final layer of the already-trained Inception-v3 model on our new categories.

Now, first we need to clone my project to your local system. I have used a CPU for this; you can also run it on a GPU.

I have trained the model with 2 classes for demo purposes; you can use more for learning. In image classification we don't need any type of annotation, just image categories.

I have provided a 2-category dataset for custom dataset training. You can use this code for a real-time image classification problem: just insert your images folders into the Dataset folder, then start training.

Training returns a .pb model file and a labels file.

Now start 

cmd :- git clone https://github.com/Manishsinghrajput98/inceptionv3_training.git

cmd :- cd  inceptionv3_training

Create a virtual environment for this. I have used Python 3.6:

cmd :- virtualenv --python=python3.6 myvenv

After this you have to activate the virtual environment:

cmd :- source myvenv/bin/activate

cmd :- pip install -r requirements.txt

The folder structure looks like this:


I have already included the Dataset in my project, so you don't need to download anything; just clone my project. If you want to train on your own images, replace the images folders in the Dataset folder. But remember, each images folder name indicates the class name.

I have provided agriculture and sports images.

One more point: you have to create a logs folder (it is used to store our final trained model).

The run.sh file contains the number of epochs, the path of your Dataset folder, etc.; a sketch of what it typically looks like is shown below.

You can increase or decrease the epochs according to your accuracy and number of classes.

You don't need to download the pre-trained model for training; it downloads automatically.
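For reference, a run.sh of this kind usually wraps TensorFlow's standard retrain.py script, something like this (a sketch; the exact flags in my file may differ):

python retrain.py \
  --image_dir Dataset \
  --how_many_training_steps 5000 \
  --output_graph logs/output_graph.pb \
  --output_labels logs/output_labels.txt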

cmd :- bash run.sh

If you run this file and it shows a "permission denied" error, open the properties of the run.sh file, allow execution, and then run it again.

The program will start by creating .txt files,

like 


After this it will start training and complete around 5000 steps,

like



After some time your model will be successfully trained, and the final trained model is stored in the logs folder,

like



Now let's test our model on images.

I have already provided the Python script. You can use it easily, but make sure you change the path of the logs folder (which contains the trained model and labels file) according to your system.

cmd :- python inception_image.py --input test1.jpg

like 



And try another category:

cmd :- python inception_image.py --input test2.jpg 

like



You can also try it on videos; I have included code for this in the project folder:

cmd :- python inception_video.py --input test.mp4 --output result.mp4

You can also download my trained model with 2 categories for testing purposes: Click here.

Note :- Friends, you can also create a Flask API for this. Don't worry, I have already created a Flask API for the Inception-v3 model; follow that blog step by step, and you can also download the code. In real-time projects we will need an API, so I suggest you follow that blog: Click here.

If you have any doubts, please comment.

Thanks