
Table of contents

    1. RT Inference
    2. TF Lite
    3. Serving with REST APIs

Changes made to the code

Options chosen: TensorRT inference, serving with REST APIs, and TensorFlow Lite inference.

Procedure followed to set up the code:

  • 1. Created the virtual environment and ran the commands as given in the repo.
  • 2. Installed the required packages, as some were not installed during the setup process.
  • 3. Created the output folder to store the generated models.
  • 4. Modified the Python code to accommodate training a new model on the MNIST data.
  • 5. Executed the code to generate the model, converted the saved model to TFLite, ran inference, and compared the accuracy and predictions of the two models. Details of the changes and results are below.
  • 6. Trained a CNN model and saved it for API serving, uploaded the model to Google Drive, ran the code in apiserving.py on Colab, and used the endpoint http://localhost:8501/v1/models/saved_model:predict to classify an image; the model predicted the image class exactly. More details in the sections below.

Changes were made in CNNsimplemodels.py, myTFInference.py, and exportTFlite.py.

1. RT Inference

    • Trained with the Fashion MNIST dataset.
      • Created a new model (inside CNNsimplemodels.py) with a different set of parameters and trained it on the MNIST data.
      • Made the necessary changes, such as the class names and other parameters for the MNIST model, and used ImageOps to convert RGB images to grayscale so that the image array has shape (28, 28, 1). This creates the inference model, which was tested with an image of a sneaker; the prediction was correct with high confidence.
      • The output model is stored in the output/fashion folder, which is later used for conversion to the TFLite model and for predictions.
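The grayscale preprocessing described above can be sketched as follows. This is a minimal sketch assuming Pillow; the function name and file path are illustrative, not the repo's actual code.

```python
# Sketch of the preprocessing step described above: convert an RGB image
# to grayscale and reshape it to (28, 28, 1) for the Fashion MNIST model.
# Assumes Pillow; the function name is illustrative, not the repo's code.
import numpy as np
from PIL import Image, ImageOps

def load_fashion_image(path):
    img = ImageOps.grayscale(Image.open(path))       # RGB -> single channel
    img = img.resize((28, 28))                       # match the model's input size
    arr = np.asarray(img, dtype=np.float32) / 255.0  # scale pixels to [0, 1]
    return arr.reshape(1, 28, 28, 1)                 # add batch and channel dims
```

The returned array can be passed directly to model.predict, since it already carries the batch dimension the saved model expects.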

2. TF Lite

3. Serving with REST APIs

      • Serve a TensorFlow model with TensorFlow Serving.

Steps followed:

      • 1. Trained the classification model using myTFDistributedTrainer.py, after adding a new model named create_simplemodelTest2 in CNNSimpleModels.py.
      • 2. This creates an output folder at output/fashion/1. We use this model with our API to make predictions.
      • 3. class_names[np.argmax(predictions[0])], np.argmax(predictions[0]), class_names[test_labels[0]], test_labels[0] — this code extracts the predicted and true classes from the prediction array.
      • 4. apiserving.py is the file which contains the necessary code; it was executed in Colab. The model saved in the outputs/fashion/1 folder was uploaded to my Drive, and serving was run by executing the code in Colab. The predictions are made as follows.
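A minimal client call to the TensorFlow Serving endpoint used above can look like the sketch below. The URL (model name saved_model, port 8501) comes from the steps; the rest is the standard TF Serving REST predict request format, not code from apiserving.py.

```python
# Sketch of a REST client for the TF Serving endpoint used above.
# Standard library only; the payload shape follows TF Serving's
# REST predict API: {"instances": [...]}.
import json
from urllib import request

SERVER_URL = "http://localhost:8501/v1/models/saved_model:predict"

def build_payload(image_batch):
    # image_batch: nested lists, one entry per image
    return json.dumps({"signature_name": "serving_default",
                       "instances": image_batch})

def predict(image_batch):
    req = request.Request(SERVER_URL,
                          data=build_payload(image_batch).encode("utf-8"),
                          headers={"content-type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]  # one softmax vector per image
```

np.argmax over each returned prediction vector then gives the class index, as in the extraction code shown in the steps.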

MultiModalClassifier

This is a project repo for a multi-modal deep learning classifier with popular models from TensorFlow and PyTorch. The goal of these baseline models is to provide a template to build on; they can be a starting point for new ideas and applications. If you want to learn the basics of ML and DL, please refer to this repo: https://github.com/lkk688/DeepDataMiningLearning.

Package setup

Install this project in development mode

(venv38) MyRepo/MultiModalClassifier$ python setup.py develop

After the installation, the package "MultimodalClassifier==0.0.1" is installed in your virtual environment. You can verify the installation by importing the package:

>>> import TFClassifier
>>> import TFClassifier.Datasetutil
>>> import TFClassifier.Datasetutil.Visutil

If you want to uninstall the package, perform the following step:

(venv38) lkk@cmpeengr276-All-Series:~/Developer/MyRepo/MultiModalClassifier$ python setup.py develop --uninstall

Code organization

Tensorflow Lite

  • Tensorflow lite guide link
  • The exportTFlite file exports the model to TFlite format.
    • The testtfliteexport function exports the float-format TFlite model.
    • The tflitequanexport function exports the TFlite model with post-training quantization, which reduces the model size.
    • The converted quantized model won't be compatible with integer-only devices (such as 8-bit microcontrollers) and accelerators (such as the Coral Edge TPU), because the input and output still remain float in order to keep the same interface as the original float-only model.
  • To ensure compatibility with integer-only devices (such as 8-bit microcontrollers) and accelerators (such as the Coral Edge TPU), we can enforce full integer quantization for all ops, including the input and output, by adding the following code to the function tflitequanintexport:
converter_int8.inference_input_type = tf.int8  # or tf.uint8
converter_int8.inference_output_type = tf.int8  # or tf.uint8
  • The floating-model check during inference will then evaluate to False:
floating_model = input_details[0]['dtype'] == np.float32
  • When preparing image data for the int8 model, we need to convert the uint8 (0 to 255) image data to int8 (-128 to 127) via the loadimageint function.
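The uint8-to-int8 conversion mentioned in the last bullet amounts to shifting the pixel range down by 128. A minimal NumPy sketch (the function name mirrors loadimageint but is not the repo's exact code):

```python
# Sketch of the pixel-range shift needed when feeding a full-integer
# TFLite model: uint8 pixels in [0, 255] become int8 values in
# [-128, 127]. The function name is illustrative.
import numpy as np

def image_uint8_to_int8(image_uint8):
    # cast up to int16 before subtracting so 0 - 128 does not wrap around
    return (image_uint8.astype(np.int16) - 128).astype(np.int8)
```

The same shift in reverse (adding 128) maps the model's int8 output back to the uint8 range if the output also needs dequantizing for display.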

TensorRT inference

Check this Colab (requires an SJSU Google account) link to learn TensorRT inference for TensorFlow models. Check these links for TensorRT inference for PyTorch models:
