Hello, and thank you very much for your contribution. I tried to run your example, but due to GPU memory limits I can use a batch size of at most 512. The problem I found is that the result is no better than NFM, and the loss drops very slowly. The command is as follows:
```
python Main.py --model_type kgat --alg_type bi --dataset last-fm --regs [1e-5,1e-5] --layer_size [64,32,16] --embed_size 64 --lr 0.001 --epoch 400 --verbose 1 --save_flag 1 --pretrain -1 --batch_size 512 --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1] --use_att True --use_kge True
```
Are the parameters wrong, and is the result affected by the batch size? In addition, I could not find where `loss_type`, `n_memory`, and `using_all_hops` are used in the source code. How should I use them?
Thanks for your interest. My suggestion is to use matrix factorization (MF) embeddings, or a KGAT model with only one layer, to initialize the user and item embeddings of the three-layer KGAT.
Note that KGAT-1 uses much less memory than KGAT-3, so the one-layer pretraining run should fit within your GPU budget.
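The pretrain-then-initialize idea above can be sketched as follows. This is a minimal illustration, not the repo's actual code path: the file name `mf_pretrain.npz`, the sizes, and the random matrices (standing in for embeddings actually learned by BPR-MF or a one-layer KGAT run) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
n_users, n_items, embed_size = 100, 200, 64

# Step 1: "pretrain" user/item embeddings. Here random matrices stand in
# for embeddings that would come from an MF or one-layer KGAT run.
mf_user = rng.normal(scale=0.1, size=(n_users, embed_size))
mf_item = rng.normal(scale=0.1, size=(n_items, embed_size))
np.savez("mf_pretrain.npz", user_embed=mf_user, item_embed=mf_item)

# Step 2: initialize the deeper model's embedding tables from the saved
# pretrained embeddings instead of a fresh random initialization.
pretrained = np.load("mf_pretrain.npz")
user_embed = pretrained["user_embed"].copy()
item_embed = pretrained["item_embed"].copy()

assert user_embed.shape == (n_users, embed_size)
assert item_embed.shape == (n_items, embed_size)
```

In the KGAT repository this warm start is controlled through the `--pretrain` flag shown in the command above; the sketch only illustrates the general mechanism of saving embeddings from a cheap model and loading them into a larger one.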