
Hardware Used for training #17

Open
TheRed002 opened this issue Apr 28, 2017 · 3 comments


TheRed002 commented Apr 28, 2017

I recently came across PoseNet and tried to experiment with it. I modified the original architecture so that my input layer size is now 1x100x224x224x3, where 1 is the batch size and 100 is the number of frames; in effect I have implemented an LSTM version of PoseNet. I then tried to train on my own data, first on a GTX 750 and then on a Quadro K1200 with 4 GB of memory, using the TensorFlow backend for Keras (my PoseNet is written in Keras), and I also tried the Theano backend, but every time I get an "out of memory" error. What hardware specs did the machine you ran your training on have?
About the data: I have around 6000 PNG images for training.
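For context, a rough back-of-the-envelope estimate shows why a 4 GB card is tight here. The sketch below (plain Python, no framework needed) computes the size of the input tensor alone; the actual footprint is much larger because backprop must keep the feature maps of every conv layer for all 100 frames, so activation memory grows roughly linearly with the sequence length. The numbers are illustrative, not measurements of this model:

```python
# Rough memory estimate for the 1x100x224x224x3 input tensor (float32 = 4 bytes).
batch, frames, h, w, c = 1, 100, 224, 224, 3
input_bytes = batch * frames * h * w * c * 4
print(f"Input tensor alone: {input_bytes / 1024**2:.1f} MiB")  # ~57.4 MiB

# Activation memory scales roughly linearly with the number of frames,
# so shortening the sequence is the most direct knob to turn:
for f in (100, 50, 25):
    est = batch * f * h * w * c * 4
    print(f"{f} frames -> input alone {est / 1024**2:.1f} MiB")
```

Since ~57 MiB is just the raw input, the intermediate conv/LSTM activations (often tens of times that) are what exhaust a 4 GB GPU; reducing the 100-frame window, downscaling the frames, or truncating backprop through time are the usual mitigations.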


ghost commented Aug 1, 2017

I think the authors mentioned in the paper that the GPU they used was an NVIDIA Titan Black. By the way, you can modify the C++ code in caffe/caffe-posenet to print out the required memory and other information, which should clarify your problem.

@DRAhmadFaraz

@TheRed002 Sorry for bothering you, but I also get this "out of memory" error, as shown below.
Have you solved this problem?

[Screenshot from 2019-04-27 20-44-59]

@zhuoruiyang How should we configure our device for this? I have a GTX 940 4 GB NVIDIA GPU.
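One configuration tweak worth trying on small GPUs (a sketch, assuming the TensorFlow 1.x / Keras stack that was current for this thread): by default TF 1.x grabs nearly all GPU memory up front, and enabling on-demand allocation sometimes avoids spurious OOM errors, though it cannot make a model fit that genuinely exceeds 4 GB:

```python
# TF 1.x-era snippet: let TensorFlow allocate GPU memory on demand
# instead of reserving it all at session creation.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True          # grow allocation as needed
K.set_session(tf.Session(config=config))        # make Keras use this session
```

If the OOM persists with `allow_growth` enabled, the model's activations simply exceed the card's memory, and reducing the sequence length or input resolution is the remaining option.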


ghost commented Apr 27, 2019 via email
