
A bug #9

Open
11051911 opened this issue Apr 29, 2024 · 0 comments
Comments

@11051911

When I ran the experiment on my own data, most of it went through, but the methods involving cross-entropy loss reported the error below. May I ask the author what the problem might be? Have you encountered it before?
```
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:106: block: [32,0,0], thread: [126,0,0] Assertion `target_val >= zero && target_val <= one` failed.
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:106: block: [4255,0,0], thread: [62,0,0] Assertion `target_val >= zero && target_val <= one` failed.
  File "E:\最新聚类论文代码\A-Unified-Framework-for-Deep-Attribute-Graph-Clustering-main\main.py", line 53, in <module>
    result = train(args, data, logger)
  File "E:\最新聚类论文代码\A-Unified-Framework-for-Deep-Attribute-Graph-Clustering-main\model\pretrain_gat_for_efrdgc\train.py", line 48, in train
    loss = F.binary_cross_entropy(A_pred.view(-1), adj_label.view(-1))
```
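For context, this CUDA assertion fires when the target tensor passed to `F.binary_cross_entropy` contains values outside [0, 1]. A minimal diagnostic sketch is below, assuming the `A_pred` / `adj_label` names from the traceback; the `check_bce_targets` helper and the clamping step are illustrative suggestions, not part of the repository's code.

```python
import torch
import torch.nn.functional as F

# Hypothetical helper: binary_cross_entropy requires every target value to lie
# in [0, 1]. Printing the range of adj_label before computing the loss shows
# whether the custom dataset's adjacency labels violate that constraint.
def check_bce_targets(A_pred: torch.Tensor, adj_label: torch.Tensor) -> torch.Tensor:
    target = adj_label.view(-1).float()
    print("adj_label range:", target.min().item(), "to", target.max().item())

    # If the adjacency matrix carries raw edge weights or duplicated self-loops,
    # clamping to [0, 1] makes it a valid BCE target. Whether clamping or
    # re-binarizing is the right fix depends on how adj_label was constructed.
    target = target.clamp(0.0, 1.0)

    return F.binary_cross_entropy(A_pred.view(-1), target)
```

Running the offending step on CPU (or with `CUDA_LAUNCH_BLOCKING=1`) can also surface the out-of-range values more clearly than the device-side assertion does.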
