where --input indicates the command-line argument for taking the input image from the user
We don't use any model training for this project.
This is our input image
Our project folder contains a Database folder, which holds various background images.
You need to pass an input image. The input image is compared against all the images in our Database folder using histograms; any image that is more than 65 percent similar is returned. The similar images are also saved in our output folder, so you can check them there.
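To make this concrete, here is a minimal sketch of the histogram comparison with OpenCV (the file and folder names are assumptions; the actual script in the repository may differ):

import cv2, glob, os

def hist_similarity(img_a, img_b):
    # grayscale histograms with 256 bins, normalized so image size doesn't matter
    h_a = cv2.calcHist([cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)], [0], None, [256], [0, 256])
    h_b = cv2.calcHist([cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)], [0], None, [256], [0, 256])
    cv2.normalize(h_a, h_a)
    cv2.normalize(h_b, h_b)
    # correlation is 1.0 for identical histograms, so 0.65 corresponds to the 65 percent threshold
    return cv2.compareHist(h_a, h_b, cv2.HISTCMP_CORREL)

query = cv2.imread("input.jpg")  # the image passed with --input
for path in glob.glob("Database/*.jpg"):
    candidate = cv2.imread(path)
    if hist_similarity(query, candidate) > 0.65:
        cv2.imwrite(os.path.join("output", os.path.basename(path)), candidate)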
These are our output images.
There are many ways to solve similar-image search; I have used histograms. In an upcoming blog we will implement a more powerful product recommendation system, using a deep learning algorithm for feature extraction, Euclidean distance for comparing the feature vectors, and Mask R-CNN for product detection. That project will be built for deployment, and I will also try to make a front end for it so you can see our back-end results in the front end.
Hi, in this blog we are implementing face detection with face identification using histograms and Python.
First is the input image and second is our output image, which is matched against our database to return the name of the person in the image. For the labels I have used some split functions.
You can check the script.
Note :- You need to save your database images with the label names as the file names. You can check our Database folder.
Histograms are commonly used in machine learning projects, but we can't use this implementation in a real face recognition product. I have tried using histograms; they provide sufficient accuracy, but not excellent accuracy.
You can use it for learning purposes: how to use histograms in Python and how to use the Haar cascade model for face detection. Haar cascades do not provide great accuracy; you can use MTCNN or YOLO-Face instead.
In an upcoming blog we will implement a face recognition project with good accuracy.
For now you can try this project; it also provides reasonable accuracy for beginners learning machine learning.
In this project we are not using any third-party API for face identification; we are using histograms.
We also do not use many other Python libraries; we are using OpenCV and some basic Python libraries to implement this project.
I will provide the code, some testing images, and the Haar cascade model for face detection, which is used to return cropped face images from our input image and Database images.
You need to clone this project and install opencv-python.
where --input indicates the command-line argument for taking the input image from the user
We don't use any model training for this project.
The Dataset folder contains the images of each person. I have used 5 people in this project; you can use more. I have already tried with 1000 classes and it gave about 70 percent accuracy,
but for this blog I have used only 5 classes.
Don't worry: if you look at the structure of the project, you will definitely understand what is happening here.
After running the Face.py script you will see the output displayed.
You need to pass the input image in which you want to detect the person. I have also added a print statement, so you can check the label name in your terminal.
The input image is passed to our program, where it first goes through our face detection method. This method returns the cropped face image using the Haar cascade OpenCV model. This step is applied to both the input image and the images in our Database folder.
After this, the cropped input image and the cropped Database images are passed to our histogram calculation. This method returns histogram values for both the input image and the Database folder.
The input-image values and the Database-image values are then passed to our histogram matching method, which returns the match values.
After this, the match values are checked.
In this process the input image is matched against our Dataset images one by one using histograms, and the match values are returned.
If a value is greater than 75 the person is identified; if the value is less than 75 the person is not detected.
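Putting these steps together, a rough sketch of the flow could look like this (the Haar cascade file name, folder names and label convention are assumptions; the actual Face.py script may differ):

import cv2, glob, os

detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

def crop_face(image):
    # detect the face with the Haar cascade model and return the cropped grayscale face
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray[y:y + h, x:x + w]

def face_hist(face):
    hist = cv2.calcHist([face], [0], None, [256], [0, 256])
    cv2.normalize(hist, hist)
    return hist

query = crop_face(cv2.imread("input.jpg"))
best_label, best_score = None, 0
for path in glob.glob("Dataset/*.jpg"):
    face = crop_face(cv2.imread(path))
    if face is None:
        continue
    # match value scaled to 0-100 so it can be compared against the 75 threshold
    score = cv2.compareHist(face_hist(query), face_hist(face), cv2.HISTCMP_CORREL) * 100
    if score > best_score:
        best_label, best_score = os.path.splitext(os.path.basename(path))[0], score

print(best_label if best_score > 75 else "No match")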
Now we are at the end of this project. In an upcoming blog we will implement the next project.
After finishing this step you have to copy/paste your data_darknet folder into our cloned project folder (which is already covered in my previous blog; you need to follow that blog for dataset preparation).
If you are using Google Colab, you need to upload this complete folder to Google Drive.
For Google Colab
First you need to open Google Colab, then log in/sign up, then mount the drive to copy our project folder into Colab, and also enable the GPU option.
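Mounting the drive only takes two lines in a Colab cell:

from google.colab import drive
drive.mount('/content/drive')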
After these steps you need to split your dataset into train and test files (train.txt, test.txt).
cmd :- cd yolo_training
cmd :- !python process.py data_darknet
After this step we are ready to train our model on the custom dataset. Before this you need to download the pre-trained YOLO model; I have already mentioned the path of this model in the pretrained_model.txt file, so just download the model files.
cmd :- ./darknet detector test /content/darknet/yolo_training/task.cfg /content/darknet/yolo_training/backup/yolo_training_30000.weights /content/darknet/yolo_training/test_images/1.jpg -thresh 0.1
You need to change the paths accordingly and also insert the path of your images (local machine or Google Colab).
If you want a Python script for this, you need to visit my previous blog, where I have provided a Flask API so you can easily integrate it into your project. Click here.
In this blog we will implement TensorFlow object detection training using a custom dataset.
Now we will start the implementation directly, without delay. If you want to study the theory part, click here.
I have used the kangaroo dataset; you can also try your own images. After that we need to set up an annotation tool for dataset preparation.
There are many types of annotation tools available, but in this blog we will use the BBox Label Tool. You need to clone my GitHub project for BBox Label Tool dataset preparation.
You don't need any specific package for this, just Tkinter.
If you are facing any problem installing Tkinter, please use sudo with the install command.
Before running main.py you need to paste your images folder into the Images folder. One more point, guys: you have to follow this file structure, otherwise it will show errors. Just paste your images folder inside the Images folder; your images folder name indicates the name of your category, so please change it accordingly.
I have already set up the whole file structure according to our annotation tool. As you can see, the Label folder stores our annotation values, one file per image in the images folder (like image1.txt).
The folder structure looks like this:
After setting up all the files we need to run this script.
cmd :- python main.py
After this you need to create the bounding boxes.
After finishing this step we need to convert our label files, which are the .txt files corresponding to our images, into XML format.
I have included all the scripts in my GitHub repository. Run this script to convert our dataset .txt files into .xml format.
cmd :- python create_xmls_files.py
This script generates the XML files, which are stored in the xmls folder, and also trivial.txt, which contains the names of our images without extensions.
Here is a short video for better understanding; you can watch it (YouTube).
After these steps you need to copy your images folder, xmls folder, and trivial.txt file into the annotations folder. I have already created this structure; you just need to replace its contents with yours in the annotations folder. Please make sure you follow the annotations directory layout. One more point: inside the annotations folder, the labels folder contains the label_map.pbtxt file; we need to change it according to your label classes.
The file structure looks like this:
After these steps we are ready to train our model on the custom dataset using Google Colab. If you want to train your model on a local machine, you can do that too; just follow these steps.
Now come to the training. Just clone my repository.
This training project contains all the important files. Just replace the annotations folder in the training project with your annotations folder.
After these steps we are ready to train. Just upload your training folder to Google Drive for use with Colab.
If you want to train on your local machine, you need to create a virtual environment under Python 3
and install the necessary packages. I have included a requirements.txt file in our training folder.
You can create a virtual environment for this:
cmd :- virtualenv --python=python3.6 myvenv
After this you have to activate the virtual environment:
cmd :- source myvenv/bin/activate
cmd :- pip install -r requirements.txt
This process is for a local machine.
If you want to train your model on Google Colab, you need to upload your complete training folder to Google Drive.
After this you need to open Google Colab. Just search for Google Colab and click on the first link, but remember that you need to log in with your email ID to use the uploaded training folder on Google Colab.
After these steps you need to mount the drive and set up the GPU. The GPU is already available in Google Colab; you just need to select the Runtime menu, click Change runtime type, then select the GPU option.
After these steps you need to copy your training folder into Google Colab. Just create a tensorflow training folder.
After running this script you will receive train.record and val.record in the tensorflow training folder.
After these steps we need to run the script below to train our model.
Before this we need to modify the ssd.config file according to our label names and the number of training steps.
line number 9 for the number of classes (you need to change it according to your labels)
line number 164 for the number of steps (for one class the default is 10000)
Then run:
cmd :- !python object_detection/train.py \
--logtostderr \
--train_dir=train \
--pipeline_config_path=ssd.config
After running this script, it takes time to train the model. Then we need to export the model in frozen_inference_graph.pb format; you just need to run the export script to get the model in .pb format.
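The export command looks roughly like this (a sketch based on the standard TensorFlow Object Detection API export script; replace <step> with your latest checkpoint number and adjust the paths to your setup):

cmd :- !python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path=ssd.config \
    --trained_checkpoint_prefix=train/model.ckpt-<step> \
    --output_directory=output_inference_graph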
After running this script you will receive the frozen_inference_graph.pb file in the output_inference_graph folder. Before running the testing script we need this frozen_inference_graph.pb model file and the label_map.pbtxt file (which is stored in our annotations folder).
In this blog we will implement the Mask R-CNN model for a custom dataset. Mask R-CNN is an instance segmentation model. First we need a dataset; the dataset is the most important part of artificial intelligence. Mask R-CNN returns the class name, bounding box coordinates, and object mask values for each object.
Now come to the point. I have already covered instance segmentation data preparation in a previous blog. First you need to learn how to annotate image data for instance segmentation; that blog contains all the details of instance segmentation data preparation. You can also use my sample kangaroo image dataset, which is available in that blog. In that blog I used the Labelme tool. Just invest 30 minutes in it and learn how to prepare a dataset for Mask R-CNN. Click here.
After finishing the dataset preparation steps, you need to download my project folder from Google Drive. I have included all the important folders, Python files, etc., in the project folder, as well as the pretrained mask_rcnn_coco.h5 model. After downloading, you need to copy/paste your dataset folder into the downloaded project folder. After finishing these steps we are ready to train the Mask R-CNN model on the custom dataset.
You must download this folder.
We will use Google Colab for model training. You can also try it on your local machine; just replace the paths of the files and folders. I have mentioned all the details in the Python script.
After this step you need to upload your project to your Google Drive.
Then search for Google Colab, mount the drive, and also set up the GPU. It's not rocket science: just click the Mount Drive button and sign in, and for the GPU you need to click Runtime, then Change runtime type, then select GPU.
After this you need to install the packages given below on Google Colab for GPU training. Just copy all the packages and paste them into Colab. If you are using your own system, please install these packages inside a Python 3 virtual environment.
%tensorflow_version 2.x
!pip uninstall -y tensorflow
!pip install tensorflow-gpu==1.14.0
!pip install Keras==2.2.4
!pip install mrcnn
After these steps you need to copy your project into Google Colab. Just create a mask_rcnn_train folder and run this script.
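For example, copying the uploaded folder from Drive could be done like this (the Drive path is an assumption; use the path where you actually uploaded the project):

import shutil
# copytree creates the mask_rcnn_train folder itself
shutil.copytree("/content/drive/My Drive/mask_rcnn_project", "/content/mask_rcnn_train")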
Now your project folder from Drive (which you uploaded) is copied into your Google Colab mask_rcnn_train folder.
After this, run:
cmd :- !pwd
You will see this type of result:
/content/
So you need to run this command:
cmd :- %cd mask_rcnn_train
(in Colab use %cd rather than !cd so that the directory change persists)
After this we are ready to train our model on the custom dataset.
One more point: if you train this model on your local system, you need to change the path of the logs folder. This folder will store our trained model, so please make sure you change it.
You can also change the number of epochs according to your labels, and you need to change the class name and the number of classes. In this blog we are using one class, named kangaroo; you can insert your class names according to your dataset.
line number 58 to change the logs folder location (this folder will store our trained model)
line number 66 to change the class name
line number 221 to change the epochs
You can also follow this Google Colab notebook; I have included all the code and the necessary commands. This is the Google Colab notebook: click here. Then modify the paths and follow these steps for training.
After running this script it takes time. The trained model files are saved in the logs folder; we will use the last trained model file.
I have already included test image files in the project folder that you can use, but you need to change these lines according to your class name and model path (a rough sketch follows the list):
line number 23 for the number of classes
line number 38 for the path of the newly trained model (the .h5 file in the logs folder)
line number 37 for the model directory folder (you can use any path for this)
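As a rough illustration, those lines correspond to something like the following (a sketch assuming the standard mrcnn API; the names and values in the actual script may differ):

import skimage.io
import mrcnn.model as modellib
from mrcnn.config import Config

class InferenceConfig(Config):
    NAME = "kangaroo"
    NUM_CLASSES = 1 + 1        # background + kangaroo (line 23)
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

MODEL_DIR = "logs"                                  # line 37: any writable folder works
WEIGHTS_PATH = "logs/mask_rcnn_kangaroo_0010.h5"    # line 38: your last trained .h5 file (name assumed)

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir=MODEL_DIR)
model.load_weights(WEIGHTS_PATH, by_name=True)

image = skimage.io.imread("input_test_images/1.jpg")
r = model.detect([image], verbose=0)[0]             # r["rois"], r["masks"], r["class_ids"], r["scores"]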
I have already included an input_test_images folder in our project folder. You can paste your images into the input_test_images folder and then replace the path of your images. If you use a local system, please change the path accordingly.
--i : - Path of input images
--o : - Path of output images
After running this script you will receive the detected images in your output folder.
FIG (A Result)
FIG (B Result)
You will receive this type of result. I know these images do not show a generated mask, because I used only 50 images for demo purposes; you can use more data and increase the epochs, and it will definitely show masks and good results.
After this we will test our model on video. Don't worry, I have already included the script in the folder; you can use it. I have also included an .mp4 file of kangaroos. You can test your model on videos according to your class name; just replace the class name and the number of classes.
line number 29 for the number of classes
line number 34 for the path of the newly trained model (the .h5 file in the logs folder)
line number 33 for the model directory folder (you can use any path for this)
In this blog we will learn how to annotate datasets for instance segmentation and semantic segmentation.
First we need to understand what semantic segmentation and instance segmentation are.
You can understand it better from this image. In semantic segmentation, every pixel in the image belongs to one particular class.
In instance segmentation, different instances of the same class are segmented individually.
Now come to the point: Mask R-CNN is an instance segmentation model, and DeepLab is a semantic segmentation model.
After annotation you can easily train your own Mask R-CNN and DeepLab models. Dataset annotation is a very important part of machine learning.
In this blog we will use the Labelme tool to annotate our dataset. You need to install this tool; don't worry, it is not rocket science.
I recommend you use a virtual environment for this to avoid installation errors.
Create a virtual environment using Python 3.6:
cmd :- virtualenv --python=python3.6 annotations
After this, activate the virtualenv:
cmd :- source annotations/bin/activate
After this, install the labelme tool:
cmd :- pip install labelme
We need to install one more package:
cmd :- pip install pyqt5
The installation process is done; now we are ready to prepare your dataset for model training.
cmd :- labelme
After hitting this command, a window like this will open.
After this you need to click the Open Dir button to select your images folder for annotation. Now we are ready to prepare the dataset image by image. Click the Create Polygons button to draw polygons on the folder images. After creating the polygons you need to insert the name of your label class, then click the Save button. You can choose where to store the annotation files; I recommend storing the JSON annotations in your images folder, next to the images they were generated from.
This process is the same for both instance segmentation and semantic segmentation.
After annotation we need to convert the annotated dataset for instance segmentation and for semantic segmentation.
Note :- Friends, dataset preparation is the most important part of an AI model, so please make sure your polygons are correct; otherwise the bad polygons will be reflected in your model.
Optional step :- We also need to remove negative images, which do not contain any objects, from the images folder. I have already provided a script for this in my GitHub repository; you can use it to remove negative images from the dataset folder. Just replace the path with the path of your folder.
Note :- I have included a use.txt file in the folder; you can read it to see how to use this script. Don't worry, it is not rocket science; you just replace the path of your annotated dataset folder (command line arguments).
You have to change the label names in your labels.txt. Please don't remove the label names __ignored__ and background_, otherwise your model will not train correctly.
One more point: you need to pass the command line arguments. We just need 3 paths: 1 is the annotated dataset path, 2 is the labels.txt file, and 3 is the output folder.
You need to run this script inside the virtual environment created earlier.
For Mask R-CNN training we need to run the Python script 2 times: once for the train folder and once for the validation folder. You can use 10 percent of the images for validation and the remainder for training. Example commands are shown below.
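For reference, with labelme's standard conversion script the two runs look roughly like this (the script name and folder names are assumptions; check the use.txt file for the exact command):

cmd :- python labelme2coco.py dataset_annotated/train output/train --labels labels.txt
cmd :- python labelme2coco.py dataset_annotated/val output/val --labels labels.txt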
After finishing this process we need to create an Annotations folder. Inside the Annotations folder we create train and val folders, inside the train and val folders we create an images folder, and then we just copy/paste our output images and the JSON (COCO) annotation file into them.
After this our Annotations folder is ready for Mask R-CNN custom dataset training.
The folder structure looks like this:
Annotations :-
(1) train :-
(I) images :-
(II) annotations.json
(2) val :-
(I) images
(II) annotations.json
Now come to semantic segmentation for DeepLab custom training.
In the next blog I have covered all the details of Mask R-CNN custom dataset training. Please review that blog and learn how to train a Mask R-CNN model on a custom dataset; in that blog I used Google Colab.
(2) Semantic segmentation for DeepLab custom training
Labelme also provides a Python script to convert annotations for semantic segmentation. My folder contains this script; you just need to run it inside the virtual environment (an example command is shown below).
One more point: you need to pass the command line arguments. We just need 3 paths: 1 is the annotated dataset path, 2 is the labels.txt file, and 3 is the output folder.
You also have to change the label names in your labels.txt. Please don't remove the label names __ignored__ and background_, otherwise your model will not train correctly.
cmd : - cd semantic_segmentation_deeplabs
Note :- I have included a use.txt file in the folder; you can read it to see how to use this script. Don't worry, it is not rocket science; you just replace the path of your annotated dataset folder (command line arguments).
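For reference, with labelme's standard semantic segmentation script the command looks roughly like this (script and folder names are assumptions; check use.txt for the exact command):

cmd :- python labelme2voc.py dataset_annotated output_voc --labels labels.txt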
The generated output folder contains multiple folders, but for DeepLab model training we need three items: 1 is the class_name.txt file, 2 is the JPEGImages folder, and 3 is the SegmentationClassPNG folder.
After this process you can copy/paste these folders into a separate folder, which we will need in a future blog for DeepLab model training.
For the DeepLab model we don't need a dataset split for now. Later, when we create TF records, we will need a dataset split; that process will be covered in a new blog :- (DeepLab Custom Dataset Training).
This is the complete process of dataset preparation for instance segmentation and semantic segmentation.
Here is a short video of the process; you can watch it and follow our dataset preparation for Mask R-CNN and DeepLab model training. (YouTube)
In the next blog I have covered all the details of Mask R-CNN custom dataset training. Please review that blog and learn how to train a Mask R-CNN model on a custom dataset; in that blog I used Google Colab.
In this blog we will cover a touchless attendance system using face recognition with a voice system, for offices, schools, colleges, coaching centres, etc., during this COVID-19 pandemic situation.
You know very well that this kind of system is very useful.
In the coming days, schools, colleges, coaching centres, private-sector offices, government offices, etc., will need this type of system for attendance.
In this system I have used Python, OpenCV, TensorFlow, gTTS, MongoDB and Node.js.
Please wait a little; I am working on the front end for displaying attendance in a web application, just 2-3 more days. I have finished my work on the back end.
This project is very useful nowadays; everyone wants this kind of project for touchless attendance with good accuracy.
This is a short video of the back end. In the next video I will show you the complete web application for touchless attendance.
How to extract mask values from an image and apply them on a white background using machine learning.
In this blog we are implementing how to extract mask values from an image and apply them on a white background using a pre-trained DeepLab machine learning model.
In this blog we are using a DeepLab pretrained model to extract the mask values from images.
DeepLab is one of the most promising techniques for semantic image segmentation.
If you want to learn more about DeepLab, you can click here.
I will cover only the implementation part; you can easily use my code in your real-time project.
Just replace the path of the model.
If you want to learn how to create annotations for DeepLab training, and also how to train a DeepLab model on a custom dataset, I will cover that in my next blog with a detailed description.
In this blog I have also provided my Flask API script, so you can easily integrate semantic image segmentation into your project.
First you need to download the DeepLab pre-trained model (click here). The downloaded folder contains 3 files, but we need only one: the frozen_inference_graph.pb file.
After this you have to clone my project from GitHub.
One more time: because my repository contains several projects, select the extract_mask_from_images_using_ml folder.
cmd :- cd extract_mask_from_images_using_ml
The project file/folder structure looks like this:
After this you have to copy/paste the downloaded model file, named frozen_inference_graph.pb, into the model folder (of the cloned project).
Create a virtual environment for this. I have used Python 3.6.
cmd :- virtualenv --python=python3.6 myvenv
After this you need to activate the virtual environment:
cmd :- source myvenv/bin/activate
cmd :- pip install -r requirements.txt
I have already collected testing images in the input folder. If you want to try your own images, copy them into the input folder.
Now we are ready for script execution. Our extract_mask_from_images_using_ml folder contains two Python scripts: one is the Flask API, and the second is without the Flask API. You should try both files.
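The core of the without-Flask script is roughly the following (a simplified sketch assuming the standard DeepLab export tensor names ImageTensor / SemanticPredictions and the input/output folder names above; the actual script may differ):

import numpy as np
import tensorflow as tf
from PIL import Image

INPUT_SIZE = 513  # DeepLab models are usually exported with a 513-pixel input size

graph = tf.Graph()
with tf.io.gfile.GFile("model/frozen_inference_graph.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef.FromString(f.read())
with graph.as_default():
    tf.import_graph_def(graph_def, name="")
sess = tf.compat.v1.Session(graph=graph)

image = Image.open("input/test.jpg").convert("RGB")
ratio = INPUT_SIZE / max(image.size)
resized = image.resize((int(image.width * ratio), int(image.height * ratio)), Image.BILINEAR)

# run the frozen graph; seg_map holds one class id per pixel
seg_map = sess.run("SemanticPredictions:0",
                   feed_dict={"ImageTensor:0": [np.asarray(resized)]})[0]

# keep only the detected object pixels and paste them onto a white background
rgb = np.asarray(resized)
white = np.full_like(rgb, 255)
white[seg_map > 0] = rgb[seg_map > 0]
Image.fromarray(white).save("output/result.png")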
If you want to test the Flask API script, we will need the Postman software. You can also design a front end for this and call your Flask API; it will return the result on the front end (HTML page).
If you want to learn how to use the Postman API tool, how to download it, log in/sign up, or set parameters, I will cover that in a future blog if you need it.
Because, friends, in the AI industry most people create a model API to show their model results in a web application; without an API you cannot show your ML model results in the front end (HTML page).
So you need to learn how to create an API for this. I have covered it in all my blog posts; you can use them.
Because I have provided a Flask API in all my blogs, you can easily use these APIs in your real-time project. I am not an expert; I graduated in the 2019 batch in Information Technology and Engineering, and I am sharing my past 1 year of experience in AI.
You can also try my without_flask_api script for testing purposes.
You can also try your own images, but it will only return the types of objects that are in the COCO dataset class list, because the DeepLab pre-trained model was trained on the COCO dataset.
Now we will come to the Flask API and implement it.
I have already mentioned that you need Postman, or any other front-end page, etc., so you can call this API.
First we need to run the server file. You can also run this file on a server (an AWS machine, etc.) for deployment.
cmd :- python with_flask_api.py --port 8080
(You can change the port; the default port is 8080.) You can also try it without specifying the port; as mentioned in my code, it will run on the default port 8080.
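Inside the script, the port handling is roughly this pattern (an illustration, not the exact code):

import argparse
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--port", type=int, default=8080)  # default port is 8080
    args = parser.parse_args()
    app.run(host="0.0.0.0", port=args.port)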
After running this script, look at your terminal.
Your terminal will look like this:
After this you need to open your Postman tool.
The Postman tool takes the address of your API and the API parameters.
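If you prefer, you can also call the API from Python instead of Postman; the endpoint name and parameter below are assumptions, so check the route defined in with_flask_api.py:

import requests

with open("input/test.jpg", "rb") as f:
    response = requests.post("http://localhost:8080/predict", files={"image": f})
print(response.status_code)
with open("output/result.png", "wb") as out:
    out.write(response.content)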
How to train Inception-v3 image classification model on custom Dataset
In this blog we are implementing the Inception-v3 image classification model on a custom dataset.
We are using the Inception-v3 model, which has already been trained by Google on 1000 classes, but what if we want to do the same thing with our own images? We are going to use transfer learning, which will help us retrain the final layer of the already trained Inception-v3 model with new categories from scratch.
Now first we need to clone my project onto your local system. I have used a CPU for this; you can also implement it on a GPU.
I have trained the model with 2 classes for demo purposes; you can implement more for learning. In image classification we don't need any type of annotation, just image categories.
I have provided a 2-category dataset for custom dataset training. You can use this code for a real-time image classification problem: just insert your images folders into the Dataset folder, then start training.
Create a virtual environment for this. I have used Python 3.6.
cmd :- virtualenv --python=python3.6 myvenv
After this you have to activate the virtual environment:
cmd :- source myvenv/bin/activate
cmd :- pip install -r requirements.txt
The folder structure looks like this:
I have already included the Dataset in my cloned project; you don't need to download it.
Just clone my project, and if you want to train on your own images, you can replace the images folders in the Dataset folder. But remember, each images folder name indicates the class name.
I have provided agriculture and sports images.
One more point: you have to create a logs folder (it is used to store our final trained model).
The run.sh file contains the number of epochs, the path of your Dataset folder, etc.; a rough example is shown below.
You can increase/decrease the epochs according to your accuracy and number of classes.
You don't need to download the pre-trained model for training; it is downloaded automatically.
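As a rough idea, run.sh might contain something like this (an illustrative sketch based on TensorFlow's standard retrain.py flags; open the actual run.sh in the project to see the real values):

python retrain.py \
    --image_dir Dataset \
    --how_many_training_steps 5000 \
    --output_graph logs/output_graph.pb \
    --output_labels logs/output_labels.txt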
cmd :- bash run.sh
If you run this file and it shows an error like "permission denied", open the properties of the run.sh file and allow execution (or run chmod +x run.sh), then run it again.
The program will start by creating .txt files,
like this:
After this it will start training and will complete around 5000 steps,
like this:
After some time your model will be successfully trained, and the final trained model will be stored in the logs folder,
like this:
Now we will test our model on images.
I have already provided the Python script; you can easily use it, but make sure you change the paths of the logs folder (which contains the trained model) and the labels file according to your system.
You can also download my trained model with 2 categories for testing purposes. Click here.
Note :- Friends, you can also create a Flask API for this. Don't worry, I have already created a Flask API for the Inception-v3 model; follow that blog step by step, and you can also download the code there. In a real-time project we will need an API, so I suggest you follow that blog. Click here.