This repository has been archived by the owner on Dec 29, 2022. It is now read-only.

Memory usage when training on GPU #222

Open
mohgh opened this issue May 13, 2017 · 0 comments

Comments


mohgh commented May 13, 2017

I'm using this code for conversational modeling with the Subtle corpus, which contains approximately 5.5 million message-response pairs. When I train the model on the GPU with the medium model parameters, the Python process allocates nearly 26 GB of RAM by step 100,000 and is killed by the kernel. Is this normal, or is there a memory leak?
Also, what data is lost when I restart training from a saved checkpoint?
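
A minimal sketch (not code from this repo, and assuming the third-party `psutil` package) for confirming the growth: log the process's resident set size every few thousand steps and check whether it keeps climbing with the step count.

```python
# Not part of this repo -- a minimal sketch, assuming the third-party
# `psutil` package, for tracking how much host RAM the training process
# holds as training progresses.
import os

import psutil

_process = psutil.Process(os.getpid())


def log_host_memory(step, every=1000):
    """Print the resident set size (RSS) of this process every `every` steps."""
    if step % every == 0:
        rss_gb = _process.memory_info().rss / float(1024 ** 3)
        print("step {}: RSS = {:.2f} GB".format(step, rss_gb))


# Hypothetical placement inside the training loop:
# for step in range(max_steps):
#     ...  # run one training step
#     log_host_memory(step)
```

If the RSS grows roughly linearly with the step count instead of plateauing after the first few thousand steps, that would point to something accumulating on the host (for example, new ops or summaries being created inside the training loop) rather than normal buffering.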

