
Attempted an implementation #31

@scottleith


I really enjoyed your "Advanced dynamic seq2seq with TensorFlow" tutorial and decided to try it out myself! I wanted to take a corpus of English quotes and build an encoder-decoder that reconstructs each quote from its meaning vector (the encoder's final hidden state).
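
For clarity, this is the kind of model I'm aiming for: essentially model 1 from your tutorial, an LSTM encoder whose final state seeds an LSTM decoder trained to reproduce the input quote. The hyperparameter values below are just illustrative, not the ones in my script:

```python
import tensorflow as tf  # written against TF 1.x, as in the tutorial

vocab_size = 27994       # from my dataset
embedding_size = 64      # illustrative
hidden_units = 128       # illustrative

# Time-major [max_time, batch_size] int32 token ids, as in the tutorial.
encoder_inputs = tf.placeholder(tf.int32, [None, None], name='encoder_inputs')
decoder_inputs = tf.placeholder(tf.int32, [None, None], name='decoder_inputs')
decoder_targets = tf.placeholder(tf.int32, [None, None], name='decoder_targets')

embeddings = tf.Variable(tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0))
encoder_emb = tf.nn.embedding_lookup(embeddings, encoder_inputs)
decoder_emb = tf.nn.embedding_lookup(embeddings, decoder_inputs)

# Encoder: only its final state is kept as the "meaning vector".
encoder_cell = tf.contrib.rnn.LSTMCell(hidden_units)
_, encoder_final_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_emb, dtype=tf.float32, time_major=True, scope='encoder')

# Decoder: starts from the encoder's final state and tries to rebuild the quote.
decoder_cell = tf.contrib.rnn.LSTMCell(hidden_units)
decoder_outputs, _ = tf.nn.dynamic_rnn(
    decoder_cell, decoder_emb, initial_state=encoder_final_state,
    time_major=True, scope='decoder')

decoder_logits = tf.layers.dense(decoder_outputs, vocab_size)

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.one_hot(decoder_targets, depth=vocab_size, dtype=tf.float32),
    logits=decoder_logits))
train_op = tf.train.AdamOptimizer().minimize(loss)
```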

I've run into an error in tf.nn.softmax_cross_entropy_with_logits:

InvalidArgumentError (see above for traceback): logits and labels must be same size: logits_size=[1000,27994] labels_size=[500,27994]

(My sequences have 5 timesteps, the batch size is 100, and the vocab size is 27994.)
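
For context, here is my reading of the shape arithmetic behind the error: the flattened logits have exactly twice as many rows as the flattened labels (100 × 10 = 1000 vs. 100 × 5 = 500), which is what you would get if the decoder runs for 10 timesteps while the targets are only padded to 5. A minimal sketch of where those sizes come from (the names below are illustrative, not the ones in my script):

```python
import tensorflow as tf  # TF 1.x

batch_size, target_steps, decoder_steps, vocab_size = 100, 5, 10, 27994

# Stand-ins for the decoder output and the targets (names are mine).
decoder_logits = tf.placeholder(tf.float32, [batch_size, decoder_steps, vocab_size])
decoder_targets = tf.placeholder(tf.int32, [batch_size, target_steps])

labels_one_hot = tf.one_hot(decoder_targets, depth=vocab_size, dtype=tf.float32)

# softmax_cross_entropy_with_logits works on 2-D [batch * time, vocab] tensors
# internally, which is where the [1000, 27994] vs [500, 27994] sizes come from.
flat_logits = tf.reshape(decoder_logits, [-1, vocab_size])
flat_labels = tf.reshape(labels_one_hot, [-1, vocab_size])
print(flat_logits.shape, flat_labels.shape)  # (1000, 27994) (500, 27994)

# This is the call that raises InvalidArgumentError when the two sizes differ:
# loss = tf.nn.softmax_cross_entropy_with_logits(labels=flat_labels, logits=flat_logits)
```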

I've been looking over my code for hours now, but can't find the mistake. I know it's a long shot, but would you be willing to take a look to see where I've gone wrong?

The code is here, and the 'problem' might be around line 246:
https://github.com/scottleith/lstm/blob/master/Attempted%20encoder-decoder%20LSTM.py

The raw data can be downloaded here: https://github.com/alvations/Quotables/blob/master/author-quote.txt

I also apologize if this is an inappropriate place to ask - I wanted to contact you directly, but GitHub doesn't make it easy!
