# Training Inception Model


## Revision as of 21:20, 31 July 2017

## Training your custom inception model

Follow [this tensorflow tutorial](https://www.tensorflow.org/tutorials/image_retraining) to retrain a new Inception model.

You can use the flower data from the tutorial, or you can create your own training data by replacing the data folder structure with your own. If you follow the tutorial for retraining, you should now have two files: `/tmp/output_graph.pb` and `/tmp/output_labels.txt`.
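As a quick sanity check (a minimal sketch, not part of the tutorial; the default paths are the two files produced above), you can verify that retraining produced both files and that the labels file is non-empty:

```python
import os


def check_retrain_outputs(graph_path="/tmp/output_graph.pb",
                          labels_path="/tmp/output_labels.txt"):
    """Return the list of class labels if both retraining outputs exist."""
    for path in (graph_path, labels_path):
        if not os.path.isfile(path):
            raise FileNotFoundError("retraining output missing: %s" % path)
    # output_labels.txt holds one class name per line (e.g. "daisy").
    with open(labels_path) as f:
        labels = [line.strip() for line in f if line.strip()]
    if not labels:
        raise ValueError("labels file is empty: %s" % labels_path)
    return labels
```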

## Optimize the graph for inference

We would like to optimize the Inception graph for inference. To do that, we first build the `optimize_for_inference` module as follows:

```
bazel build tensorflow/python/tools:optimize_for_inference
```

Now we optimize our graph:

```
bazel-bin/tensorflow/python/tools/optimize_for_inference \
    --input=/tmp/output_graph.pb \
    --output=/tmp/optimized_graph.pb \
    --input_names=Mul \
    --output_names=final_result
```

An inference-optimized graph `optimized_graph.pb` will be generated. We can use it along with the `output_labels.txt` file to recognize flowers.
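To turn the network's output into a flower name, the index of the highest-scoring entry in the final softmax vector (the `final_result` node) is looked up in `output_labels.txt`, which holds one label per line. A minimal sketch of that lookup step, with a made-up score vector standing in for the real network output:

```python
def load_labels(labels_path):
    # output_labels.txt contains one class name per line.
    with open(labels_path) as f:
        return [line.strip() for line in f if line.strip()]


def best_label(scores, labels):
    """Map a softmax score vector to the highest-scoring label."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]
```

In the demo application the scores come from running the graph; here we only show the label lookup, which is the same regardless of how the scores were produced.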

## Deploying the model

Emgu.TF v1.3 includes an InceptionObjectRecognition demo project. We can modify the project to use our custom trained model.

We can either include the trained model with our application or, to reduce the application size, have the app download it from the internet at run time. We have uploaded our two trained model files to GitHub, under the URL:

https://github.com/emgucv/models/raw/master/inception_flower_retrain/
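A sketch of the download step, assuming the two files under that URL are named `optimized_graph.pb` and `output_labels.txt` (the names produced by the steps above; the actual uploaded file names may differ):

```python
import os
import urllib.request

BASE_URL = "https://github.com/emgucv/models/raw/master/inception_flower_retrain/"
# File names assumed from the optimization step above.
MODEL_FILES = ["optimized_graph.pb", "output_labels.txt"]


def model_urls(base=BASE_URL, names=MODEL_FILES):
    """Build the full download URL for each model file."""
    return [base + name for name in names]


def download_models(dest_dir=".", base=BASE_URL, names=MODEL_FILES):
    """Fetch each model file unless a local copy already exists."""
    for name, url in zip(names, model_urls(base, names)):
        dest = os.path.join(dest_dir, name)
        if not os.path.exists(dest):
            urllib.request.urlretrieve(url, dest)


if __name__ == "__main__":
    download_models()
```

Skipping files that already exist means the (multi-megabyte) graph is only fetched on first launch, which is the point of shipping the app without the model.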