
Question about pretrain embedding #3

Open

WDdeBWT opened this issue Jun 16, 2020 · 5 comments

WDdeBWT commented Jun 16, 2020

Hi, I'd like to ask: was the pretrain embedding used in this code obtained by training TransR for multiple epochs?

WDdeBWT (Author) commented Jun 16, 2020

I didn't see any code in this repo for training and saving the pretrain embedding. I'd like to ask how this part is implemented; it would be even better if the accompanying code were available. Thanks.

LunaBlack (Owner) commented

I used the pretrain embedding provided by the paper's authors in their GitHub repo: https://github.com/xiangwang1223/knowledge_graph_attention_network

I didn't find the code for that part either; I'd suggest opening an issue in that repo to ask.

PS: without the pretrain embedding, the results seem to be much worse.
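
For reference, the pretrain files in that repo are `.npz` archives. A minimal loading sketch might look like the following; the `Model/pretrain/<dataset>/mf.npz` path and the `user_embed`/`item_embed` key names are assumptions based on how the original KGAT code appears to load them, not something confirmed in this thread:

```python
import numpy as np

# Hypothetical path -- the original KGAT repo appears to store BPRMF
# embeddings as an .npz archive per dataset (assumption).
pretrain_path = 'Model/pretrain/amazon-book/mf.npz'

pretrain_data = np.load(pretrain_path)

# Key names assumed from the original KGAT code: one matrix for users,
# one for items, each of shape (num_users_or_items, embed_dim).
user_embed = pretrain_data['user_embed']
item_embed = pretrain_data['item_embed']
print(user_embed.shape, item_embed.shape)
```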

WDdeBWT (Author) commented Jun 16, 2020

> I used the pretrain embedding provided by the paper's authors in their GitHub repo: https://github.com/xiangwang1223/knowledge_graph_attention_network
>
> I didn't find the code for that part either; I'd suggest opening an issue in that repo to ask.
>
> PS: without the pretrain embedding, the results seem to be much worse.

Got it, thanks. Your code here is written in such detail that I had assumed all along it was the PyTorch version released by the authors' team. Best wishes for your work & research!

Keep0828 commented

According to the original author's explanation:

> In terms of the pretrain step, you can run the BPRMF model and store the MF embeddings under the 'Model/pretrain/'.

Hope it can be helpful.
xiangwang1223/knowledge_graph_attention_network#28
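
As a rough illustration of that pretrain step, a minimal BPRMF sketch in PyTorch might look like the code below. This is not the original authors' code: the model shape, the placeholder triple sampler, and the `Model/pretrain/<dataset>/mf.npz` output path with `user_embed`/`item_embed` keys are all assumptions based on the format the KGAT code appears to expect.

```python
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class BPRMF(nn.Module):
    """Plain matrix factorization trained with the BPR pairwise loss."""
    def __init__(self, n_users, n_items, embed_dim=64):
        super().__init__()
        self.user_embed = nn.Embedding(n_users, embed_dim)
        self.item_embed = nn.Embedding(n_items, embed_dim)
        nn.init.xavier_uniform_(self.user_embed.weight)
        nn.init.xavier_uniform_(self.item_embed.weight)

    def bpr_loss(self, users, pos_items, neg_items):
        u = self.user_embed(users)
        pos = self.item_embed(pos_items)
        neg = self.item_embed(neg_items)
        pos_scores = (u * pos).sum(dim=-1)
        neg_scores = (u * neg).sum(dim=-1)
        # BPR: push observed items to score higher than unobserved ones.
        return -F.logsigmoid(pos_scores - neg_scores).mean()

# Hypothetical sizes; in practice these come from the dataset loader.
n_users, n_items, n_epochs = 1000, 2000, 10
model = BPRMF(n_users, n_items)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(n_epochs):
    # Placeholder sampler: real training draws (user, pos, neg) triples
    # from the interaction data, with neg sampled from unobserved items.
    users = torch.randint(0, n_users, (1024,))
    pos_items = torch.randint(0, n_items, (1024,))
    neg_items = torch.randint(0, n_items, (1024,))
    loss = model.bpr_loss(users, pos_items, neg_items)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Store the MF embeddings where the KGAT code looks for them
# (path and key names are assumptions, not confirmed by this thread).
out_dir = 'Model/pretrain/amazon-book'
os.makedirs(out_dir, exist_ok=True)
np.savez(os.path.join(out_dir, 'mf.npz'),
         user_embed=model.user_embed.weight.detach().numpy(),
         item_embed=model.item_embed.weight.detach().numpy())
```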
