
Out of memory running pre-trained system even with no parallelization #14

Open
Trujillo94 opened this issue Jul 18, 2022 · 1 comment

@Trujillo94

I am running the pre-trained system to get the labels on my computer, and it runs out of memory. The machine has 16 GB of RAM, 4 GB of which are already in use, leaving 12 GB available exclusively for this purpose.

I saw the Known Issue and applied the steps mentioned in the README.md to run each process sequentially instead of in parallel in order to save memory, but it still runs out of memory.

# pool = mp.Pool(processes=cores)
# result = pool.map(get_labels, range(0, len(topic_list)))
result = []
for i in range(0, len(topic_list)):
    result.append(get_labels(i))
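For what it's worth, one further way to shave memory off the sequential loop above is to stream each result to disk instead of accumulating the whole `result` list in RAM, and to force garbage collection between topics. This is only a sketch: `get_labels` and `topic_list` here are hypothetical stand-ins for the repository's real objects.

```python
import gc
import json

# Hypothetical stand-ins for the repository's objects, for illustration only.
topic_list = [["data", "science"], ["neural", "network"]]

def get_labels(i):
    # Placeholder: the real get_labels(i) returns candidate labels for topic i.
    return {"topic": i, "labels": topic_list[i]}

# Write each result to a JSON-lines file as soon as it is computed, so the
# full result list never has to live in memory at once.
with open("labels.jsonl", "w") as out:
    for i in range(len(topic_list)):
        out.write(json.dumps(get_labels(i)) + "\n")
        gc.collect()  # reclaim intermediate allocations before the next topic
```

Whether this helps depends on where the memory actually goes; if a single `get_labels(i)` call already exceeds the available RAM, streaming the results will not be enough.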

Does anyone know what the minimum hardware requirements are to make this work? I couldn't find them anywhere.
Is there any other way, apart from the one already mentioned, to reduce memory usage?

Thanks.

@KINJALMARU16

Hi @Trujillo94, how did you run the pre-trained model?
Which gensim version did you use?
