
Serving using ExportStrategy #4

Open
dennybritz opened this issue Mar 3, 2017 · 11 comments

@dennybritz
Contributor

Should figure out how to export models for serving. I think TensorFlow provides something like an ExportStrategy that can be passed to the estimator, and it will occasionally export the model.
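For reference, a rough sketch of how an ExportStrategy could be wired into tf.contrib.learn.Experiment. None of these names come from this repo; model_fn, train_input_fn, eval_input_fn, serving_input_fn, run_config, and hparams are placeholders for whatever the project defines — the point is only where the export strategy plugs in.

```python
import tensorflow as tf


def experiment_fn(run_config, hparams):
  # model_fn / input fns are placeholders for the project's own functions.
  estimator = tf.contrib.learn.Estimator(model_fn=model_fn, config=run_config)
  export_strategy = tf.contrib.learn.make_export_strategy(
      serving_input_fn,    # builds the placeholders/receivers used at serving time
      exports_to_keep=3)   # keep only the most recent exports
  return tf.contrib.learn.Experiment(
      estimator=estimator,
      train_input_fn=train_input_fn,
      eval_input_fn=eval_input_fn,
      export_strategies=[export_strategy])
```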

@gidim

gidim commented Apr 2, 2017

It might be easier to add support for using feed_dict as an alternative to the InputPipeline.
Many TF models use that paradigm rather than reading the input data directly from the graph.
I tried doing this myself, but the current architecture depends on tf.slim.Dataset, which cannot be used for online inference.

Usually you would declare two placeholders, one for features and one for targets, and then
feed them as the model inputs. Currently the model in basic_seq2seq expects labels and features as a dictionary. I could not find the structure for that dictionary, but I suspect it comes from Dataset.
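For illustration, a minimal sketch of the placeholder/feed_dict pattern described above. The names, shapes, and the tiny embedding lookup standing in for the real seq2seq model are all assumptions, not this repo's API.

```python
import tensorflow as tf

# A placeholder takes the place of the input pipeline at inference time.
source_tokens = tf.placeholder(tf.int32, shape=[None, None], name="source_tokens")
embeddings = tf.get_variable("embeddings", shape=[100, 16])
encoded = tf.nn.embedding_lookup(embeddings, source_tokens)  # stand-in "model"

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  # Feed a single pre-tokenized example instead of reading from the Dataset.
  out = sess.run(encoded, feed_dict={source_tokens: [[4, 8, 15]]})
  print(out.shape)  # (1, 3, 16)
```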

@skyw

skyw commented Apr 3, 2017

I think the standard way is to freeze the model, then use feed_dict to feed the input and fetch the output tensor.
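For the record, a rough sketch of that flow using the stock graph_util tools; the checkpoint path and tensor names are made up for illustration.

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# 1) Freeze: fold the trained variables into constants in the GraphDef.
with tf.Session(graph=tf.Graph()) as sess:
  saver = tf.train.import_meta_graph("model.ckpt.meta")   # hypothetical checkpoint
  saver.restore(sess, "model.ckpt")
  frozen = graph_util.convert_variables_to_constants(
      sess, sess.graph_def, output_node_names=["model/predictions"])  # assumed node name
  tf.train.write_graph(frozen, ".", "frozen_model.pb", as_text=False)

# 2) Serve: load the frozen graph and run it with feed_dict.
with tf.Session(graph=tf.Graph()) as sess:
  graph_def = tf.GraphDef()
  with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
  tf.import_graph_def(graph_def, name="")
  # sess.run("model/predictions:0", feed_dict={"source_tokens:0": [[4, 8, 15]]})
```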

@gidim

gidim commented Apr 3, 2017

@skyw that's correct, but since the models in this repo use Readers, you first have to declare a new input tensor to the model (a Placeholder).
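One way to do that is to re-import the graph with an input_map, so every consumer of the reader's output reads from a new placeholder instead. This is only a sketch; the tensor name below is a made-up example, not one that exists in this repo.

```python
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_model.pb", "rb") as f:   # assumes a frozen graph as above
  graph_def.ParseFromString(f.read())

source_tokens = tf.placeholder(tf.int32, shape=[None, None], name="source_tokens")
# input_map rewires consumers of the reader-fed tensor to the placeholder.
tf.import_graph_def(
    graph_def,
    input_map={"read_batch/source_tokens:0": source_tokens},
    name="")
```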

@xueyouluo

Any updates on this feature? I agree with @gidim; currently the model depends on tf.slim.Dataset, which is hard to use for online inference.

@amirj
Contributor

amirj commented Apr 25, 2017

I think there are two options:

  1. Load the model, add an input placeholder to the graph, remove the input pipeline from the graph, and freeze it, as mentioned by @skyw.

  2. [better option] Leverage the TF Serving component to plug a model containing an input pipeline with queues into a web server (see the export sketch after this list).
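For option 2, TF Serving loads a SavedModel whose signature inputs are tensors named in the signature, which in practice means placeholders, so the queue-based training pipeline would not end up in the served graph. A minimal export sketch; the signature key, tensor names, and the identity op standing in for the model are assumptions.

```python
import tensorflow as tf

builder = tf.saved_model.builder.SavedModelBuilder("export/1")
with tf.Session(graph=tf.Graph()) as sess:
  source_tokens = tf.placeholder(tf.int32, [None, None], name="source_tokens")
  predictions = tf.identity(source_tokens, name="predictions")  # stand-in model
  sess.run(tf.global_variables_initializer())

  signature = tf.saved_model.signature_def_utils.predict_signature_def(
      inputs={"source_tokens": source_tokens},
      outputs={"predictions": predictions})
  builder.add_meta_graph_and_variables(
      sess, [tf.saved_model.tag_constants.SERVING],
      signature_def_map={"predict": signature})
builder.save()
```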

I think the serving feature is very important for this project; it is also referred to in other issues (#114).

@gidim

gidim commented Apr 25, 2017

@amirj does TF Serving support option two? If so, could you link to the documentation/code? I couldn't find anything about it.

@amirj
Contributor

amirj commented Apr 25, 2017

Input pipelines are the standard way of feeding TF models, so I think it is indeed possible, but the current documentation of the TF Serving module is not clear.

@gidim

gidim commented Apr 25, 2017

@amirj - that's the standard way when performing training; for inference most models use feed_dict.
Usually it's easy to rebuild the graph with two placeholders, but the code here is a little complex and has quite a few dependencies, which makes it harder.

@amirj
Contributor

amirj commented Apr 25, 2017

It would be strange if TF Serving were not compatible with input queues.
It's not a good design to use different feeding mechanisms for training and inference. For example, one application of ExportStrategy is online inference during the training phase (#114). Rebuilding the graph in that situation is not good.
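One way to reconcile this with the Estimator/Experiment flow, as far as I can tell: the export path reuses the same model_fn, but builds its inputs from a serving_input_fn, so nothing has to be rebuilt by hand. A minimal sketch; the feature names are assumptions, and InputFnOps is the contrib.learn structure the export utilities expect.

```python
import tensorflow as tf
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils


def serving_input_fn():
  # Placeholders replace the training queues when the estimator exports.
  source_tokens = tf.placeholder(tf.int32, [None, None], name="source_tokens")
  source_len = tf.placeholder(tf.int32, [None], name="source_len")
  features = {"source_tokens": source_tokens, "source_len": source_len}
  # No labels at serving time; the placeholders double as the default inputs.
  return input_fn_utils.InputFnOps(features, None, features)
```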

@sathyarr

@amirj @gidim

Do you think the approach followed here to serve the graph is recommended?

@sathyarr

Should figure out how to export models for serving. I think TensorFlow provides something like an ExportStrategy that can be passed to the estimator, and it will occasionally export the model.

@dennybritz
I have managed to export through export_strategies in Experiment,
but I'm facing problems with preprocessing and embedding.
Kindly have a look here and here.

Are the operations performed in _preprocess exported to the graph?
