Is there a way to pass extra feature tokens along with the existing word tokens (training features from the source file) and feed them to the encoder RNN?
Say I have two more feature columns (Feature2 and Feature3 below) for the corresponding source vocabulary set (Feature1 here), in addition to the source vocabulary set itself. For example, consider the table below:
Feature1    Feature2    Feature3
word1       x           a
word2       y           b
word3       y           c
...
Moreover, I believe this is glossed over in the seq2seq PyTorch tutorial (https://github.com/spro/practical-pytorch/blob/master/seq2seq-translation/seq2seq-translation.ipynb), quoted below:
“When using a single RNN, there is a one-to-one relationship between inputs and outputs. We would quickly run into problems with different sequence orders and lengths that are common during translation. […] With the seq2seq model, by encoding many inputs into one vector, and decoding from one vector into many outputs, we are freed from the constraints of sequence order and length. The encoded sequence is represented by a single vector, a single point in some N dimensional space of sequences. In an ideal case, this point can be considered the "meaning" of the sequence.”
Is this possible to implement in TensorFlow seq2seq with a few modifications? If so, it would be great if anyone could explain how to implement this in practice. Thanks in advance.
@iamsiva11
If you only want to add some simple features, you can wrap each word together with all of its features in this format:

" ".join(["word0|f1|f2", "word1|f1|f2", ...])

then split on the spaces, and on the "|" delimiter, in the _preprocess function to recover each word's features.
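A minimal sketch of that pack/parse idea in plain Python; the helper names here are made up for illustration, and the actual splitting would live wherever this repo's _preprocess hook tokenizes source lines:

```python
def pack_features(words, feats1, feats2):
    # Pack each word and its feature columns into one token: "word|f1|f2".
    return " ".join("|".join(t) for t in zip(words, feats1, feats2))

def unpack_features(line):
    # Recover (word, f1, f2) tuples from a packed line,
    # e.g. inside a _preprocess-style function.
    return [tuple(tok.split("|")) for tok in line.split(" ")]

packed = pack_features(["word1", "word2"], ["x", "y"], ["a", "b"])
# packed == "word1|x|a word2|y|b"
assert unpack_features(packed) == [("word1", "x", "a"), ("word2", "y", "b")]
```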
If you want to add more complicated features, you may want to change the plain-text input format to the TFRecord format, but that would take a lot of time to restructure the code.
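If you did go the TFRecord route, a rough sketch might look like the following; the field names ("source_tokens", "feat1", "feat2") and the output path are assumptions for illustration, not this repo's actual schema:

```python
import tensorflow as tf

def to_example(words, feats1, feats2):
    """Serialize one source sentence plus its per-word feature columns."""
    def bytes_list(values):
        # Store each column as a list of UTF-8 byte strings.
        return tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[v.encode("utf-8") for v in values]))
    return tf.train.Example(features=tf.train.Features(feature={
        "source_tokens": bytes_list(words),  # hypothetical field names
        "feat1": bytes_list(feats1),
        "feat2": bytes_list(feats2),
    }))

# Write one record per sentence; the input pipeline would then parse
# these fields back out instead of splitting plain-text lines.
with tf.io.TFRecordWriter("train.tfrecord") as writer:
    writer.write(
        to_example(["word1", "word2"], ["x", "y"], ["a", "b"]).SerializeToString())
```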