Some questions about batch_size and some parameters #8

Open
Bosen-Zhang opened this issue Aug 23, 2019 · 2 comments

@Bosen-Zhang

Hello, thank you very much for your contribution. I tried to run your example, but because of GPU memory limits the largest batch_size I can use is 512. The problem I see is that the results are no better than NFM, and the loss drops very slowly. The command I ran is:
python Main.py --model_type kgat --alg_type bi --dataset last-fm --regs [1e-5,1e-5] --layer_size [64,32,16] --embed_size 64 --lr 0.001 --epoch 400 --verbose 1 --save_flag 1 --pretrain -1 --batch_size 512 --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1] --use_att True --use_kge True

Are my parameters wrong, or is this caused by the smaller batch_size? In addition, I could not find where loss_type, n_memory, and using_all_hops are used in the source code; how should I set them?
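One way to see which of these flags Main.py actually registers is to dump the parsed arguments. The sketch below assumes the repository builds them with argparse in a module like utility/parser.py; the module path and function name are guesses, not the repository's confirmed layout:

```python
# Print every argument the parser knows about, with its current/default value.
# utility.parser and parse_args are assumed names; adjust the import to wherever
# the repository actually constructs its argparse.ArgumentParser.
from utility.parser import parse_args

args = parse_args()
for name, value in sorted(vars(args).items()):
    print(name, '=', value)
```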

@xiangwang1223
Owner

Thanks for your interest. My suggestion is to use matrix factorization (MF) embeddings, or a KGAT with only one layer, to initialize the user and item embeddings of the three-layer KGAT.

Actually, KGAT-1 will use much less memory than KGAT-3.
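A minimal sketch of what that pretraining could look like, assuming the MF embeddings are stored as a NumPy archive with separate user and item arrays (the file path and key names below are illustrative, not necessarily the repository's actual layout):

```python
import numpy as np
import tensorflow as tf

# Load pretrained MF user/item embeddings; the path and the key names
# ('user_embed', 'item_embed') are assumptions about how they are stored.
pretrain = np.load('pretrain/last-fm/mf.npz')
user_init = pretrain['user_embed'].astype(np.float32)   # expected shape (n_users, 64)
item_init = pretrain['item_embed'].astype(np.float32)   # expected shape (n_items, 64)

# Initialize the three-layer KGAT embedding tables from MF instead of random values,
# so training starts from a reasonable point even with a small batch_size.
user_embedding = tf.Variable(user_init, name='user_embedding')
item_embedding = tf.Variable(item_init, name='item_embedding')
```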

@xiangwang1223
Owner

Hi @Bosen-Zhang, I have uploaded the MF embeddings. Now you can rerun the model and check whether your results are consistent with ours. Thanks.
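Before rerunning, a quick sanity check that the downloaded embeddings load and match embed_size 64 might look like the following; the file location and key names are the same assumptions as in the sketch above:

```python
import numpy as np

# Verify the uploaded MF embeddings have the expected 64-dimensional size
# before pointing training at them; the file path is an assumption.
data = np.load('pretrain/last-fm/mf.npz')
print(data['user_embed'].shape, data['item_embed'].shape)  # expect (n_users, 64) and (n_items, 64)
```

If `--pretrain -1` in the command above is the setting that loads pretrained embeddings, rerunning that same command should then pick these up.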
