[Feature request] online inference #114
Comments
That definitely is an important feature. I think it goes together with #4 - TensorFlow Serving seems to be the recommended way to implement this.
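For reference, here is a rough sketch of what a TensorFlow Serving client query could look like once the trained model has been exported with a serving signature. The model name, signature name, and input/output tensor keys below are placeholders, not the actual names used by this project:

```python
# Hypothetical gRPC client for a seq2seq model exported to TensorFlow Serving.
# Model name, signature name, and tensor keys are placeholders.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "seq2seq_model"              # placeholder model name
request.model_spec.signature_name = "serving_default"  # placeholder signature

tokens = "hello world".split()
request.inputs["source_tokens"].CopyFrom(              # placeholder input key
    tf.make_tensor_proto(tokens, shape=[1, len(tokens)]))
request.inputs["source_len"].CopyFrom(                 # placeholder input key
    tf.make_tensor_proto([len(tokens)], shape=[1]))

response = stub.Predict(request, 10.0)                 # 10 second timeout
print(response.outputs["predicted_tokens"])            # placeholder output key
```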
This feature is very interesting. Something like this project's online demo would be excellent to have during training. |
Is it possible to take an input sentence in the source language, build the corresponding tensor for it, pass it to the trained model, and finally get the output tensor? Something similar to this. |
@amirj I was able to put together a naive solution, probably sufficient for a simple web demo. It works by passing the string via a feed_dict and using another InferenceTask hook. The same issue was brought up in #195.
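For anyone looking for a starting point, this is roughly what the feed_dict approach could look like with a restored TF 1.x graph. The checkpoint path and the placeholder/output tensor names are illustrative assumptions, not the project's actual graph node names:

```python
# Rough sketch of feed_dict-based online inference against a restored TF 1.x graph.
# Tensor names and the checkpoint directory below are assumptions, not real names.
import tensorflow as tf

checkpoint_dir = "/path/to/model_dir"  # placeholder for the trained model directory
checkpoint_path = tf.train.latest_checkpoint(checkpoint_dir)

graph = tf.Graph()
with graph.as_default():
    # Rebuild the graph from the exported meta graph.
    saver = tf.train.import_meta_graph(checkpoint_path + ".meta")
    source_tokens = graph.get_tensor_by_name("source_tokens:0")   # assumed name
    source_len = graph.get_tensor_by_name("source_len:0")         # assumed name
    predictions = graph.get_tensor_by_name("predicted_tokens:0")  # assumed name

with tf.Session(graph=graph) as sess:
    saver.restore(sess, checkpoint_path)
    tokens = "hello world".split()
    # Feed the tokenized source sentence directly instead of reading from a file.
    output = sess.run(predictions, feed_dict={
        source_tokens: [tokens],
        source_len: [len(tokens)],
    })
    print(output)
```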
@noname01 That's great. Thank you for sharing.
@noname01 Thank you for sharing too.
Thank you very much for this open source contribution! This is very helpful to my research!
The current implementation of inference focuses on batch-mode: read a file and print out the results.
Besides the above-mentioned batch-mode scenario, there is another scenario which I believe would benefit many users. Once I have trained a machine translation model, I would like to provide a demo translation service that returns the top k translation results for an input sentence. This would make it very easy to intuitively demonstrate the trained model.
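As an illustration of the kind of demo service meant here, a minimal sketch using Flask. translate_top_k is a hypothetical helper, not part of this project; it stands in for whatever wraps the trained model's beam search and returns the k best translations for a sentence:

```python
# Minimal sketch of a demo translation service returning the top-k hypotheses.
# translate_top_k is a hypothetical stand-in for a call into the trained model.
from flask import Flask, jsonify, request

app = Flask(__name__)

def translate_top_k(sentence, k=5):
    # Placeholder: replace with the real model call (e.g. beam search decoding)
    # that returns the k highest-scoring translations for the input sentence.
    return [sentence] * k

@app.route("/translate")
def translate():
    sentence = request.args.get("q", "")
    k = int(request.args.get("k", 5))
    return jsonify({"input": sentence,
                    "translations": translate_top_k(sentence, k)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

A request like /translate?q=hello+world&k=3 would then return the three best translations as JSON.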