Deploy both tensor-bridge and tensorflow-serving so that you can use JSON to talk to your TensorFlow models.
You must provide a URL for a TensorFlow SavedModel packaged as a .tar.gz file. For example:
https://s3.amazonaws.com/octo-public/wide_deep_model.tar.gz
If the root directory of your model is wide_deep_model, you can create the archive with this command:
tar -zcvf wide_deep_model.tar.gz wide_deep_model
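Before uploading, you can sanity-check the archive by listing its contents without extracting it:
tar -tzf wide_deep_model.tar.gz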
The extracted directory structure should look like this:
wide_deep_model
wide_deep_model/1
wide_deep_model/1/variables
wide_deep_model/1/saved_model.pb
wide_deep_model/1/variables/variables.data-00000-of-00001
wide_deep_model/1/variables/variables.index
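The archive must then be publicly reachable over HTTPS. One way to host it, assuming you have the AWS CLI configured and a public bucket of your own (the bucket name below is a placeholder), is:
aws s3 cp wide_deep_model.tar.gz s3://<your-public-bucket>/ --acl public-read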
git clone [email protected]:trevorscott/tf-bridge.git
cd tf-bridge
heroku create $appName
heroku buildpacks:add -i 1 https://github.com/heroku/heroku-buildpack-apt.git
heroku buildpacks:add -i 2 https://github.com/heroku/heroku-buildpack-python.git
heroku buildpacks:add -i 3 https://github.com/danp/heroku-buildpack-runit.git
heroku config:set TENSORFLOW_MODEL_URL=https://s3.amazonaws.com/<your-public-bucket>/<your-publicly-accessible-model>.tar.gz
git push heroku master
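After the push, you can watch the dyno boot and confirm the model download from TENSORFLOW_MODEL_URL kicked off (the exact log lines depend on the buildpacks and the tf-bridge scripts):
heroku logs --tail -a $appName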
A pre-made, publicly available model is provided here:
https://s3.amazonaws.com/octo-public/wide_deep_model.tar.gz
Some background information about the model can be found here.
Set the provided URL as a config var via the Heroku Button deploy, or set it manually:
heroku config:set TENSORFLOW_MODEL_URL=https://s3.amazonaws.com/octo-public/wide_deep_model.tar.gz -a $appName
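You can confirm the config var is set with:
heroku config:get TENSORFLOW_MODEL_URL -a $appName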
If you want to test your server with the provided model, a client and test data are included in the wide-deep directory. To run the client against your server:
git clone [email protected]:heroku/tf-bridge.git
cd tf-bridge/wide-deep
pipenv --three
pipenv install
pipenv run python wide_deep_client.py https://$appName.herokuapp.com
If all goes well you should see:
--------------------------
--------------------------
Accuracy: 0.8
--------------------------
--------------------------
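Independently of the Python client, you can check that the bridge is responding with a plain HTTP request. The exact routes and JSON format are defined by tf-bridge, so consult the repo for the endpoint your model maps to; a bare request to the app root at least confirms the server is up:
curl -i https://$appName.herokuapp.com/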
See SIEGE.md for load-testing details.
- The Apt buildpack loads tensorflow-model-server and its dependencies.
- The Runit buildpack manages the tensorflow-serving and tf-bridge processes.
- A .profile.d script loads models from the TENSORFLOW_MODEL_URL config var (a simplified sketch follows this list).
- Models must be exported using the SavedModelBuilder module, which is outlined here.
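For reference, the model-loading step amounts to downloading and unpacking the archive at boot. A minimal sketch of the kind of thing such a .profile.d script does, with the output paths assumed for illustration (the actual script in the repo is authoritative):
# Sketch only: fetch the SavedModel archive named in the config var and unpack it
curl -sL "$TENSORFLOW_MODEL_URL" -o /app/model.tar.gz
tar -xzf /app/model.tar.gz -C /app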