Memory efficient GATConv #2364
HongtaoYang started this conversation in Show and tell
-
Hi, I've been using GAT on very dense graphs where the number of edges (E) is very close to N^2 (N is the number of nodes). The issue is that during propagation, `x_i` and `x_j` get materialised as tensors of shape `[E, feature_dim]`, which takes a lot of memory: for instance, with N = 10,000 nodes, E ≈ 10^8 edges, and feature_dim = 64, each of these tensors needs roughly 25 GB in float32. I asked a related question in #2356.

I tried to work around this issue using `SparseTensor` and implementing the `message_and_aggregate` method. Here is the modified GATConv code:
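A minimal sketch of the approach, not the exact code from the post: the class name `SparseGATConv`, the hyperparameters, and the direct call to `message_and_aggregate` from `forward` are illustrative, and it assumes `torch_sparse` and PyTorch Geometric are installed.

```python
import torch
import torch.nn.functional as F
from torch import Tensor
from torch.nn import Linear, Parameter
from torch_sparse import SparseTensor, matmul
from torch_geometric.nn.conv import MessagePassing
from torch_geometric.utils import softmax


class SparseGATConv(MessagePassing):
    """GAT layer that aggregates via per-head sparse matmuls, so node
    features are never expanded to an edge-sized [E, H, C] tensor."""

    def __init__(self, in_channels: int, out_channels: int, heads: int = 1,
                 negative_slope: float = 0.2):
        super().__init__(node_dim=0)
        self.heads, self.out_channels = heads, out_channels
        self.negative_slope = negative_slope
        self.lin = Linear(in_channels, heads * out_channels, bias=False)
        self.att_src = Parameter(torch.empty(1, heads, out_channels))
        self.att_dst = Parameter(torch.empty(1, heads, out_channels))
        torch.nn.init.xavier_uniform_(self.lin.weight)
        torch.nn.init.xavier_uniform_(self.att_src)
        torch.nn.init.xavier_uniform_(self.att_dst)

    def forward(self, x: Tensor, adj_t: SparseTensor) -> Tensor:
        H, C = self.heads, self.out_channels
        x = self.lin(x).view(-1, H, C)              # [N, H, C]

        # Node-level attention terms: gathering these per edge costs
        # O(E * H) memory instead of O(E * H * C) for full features.
        alpha_src = (x * self.att_src).sum(dim=-1)  # [N, H]
        alpha_dst = (x * self.att_dst).sum(dim=-1)  # [N, H]

        row, col, _ = adj_t.coo()                   # row = target, col = source
        alpha = alpha_src[col] + alpha_dst[row]     # [E, H], the only edge-sized tensor
        alpha = F.leaky_relu(alpha, self.negative_slope)
        alpha = softmax(alpha, row, num_nodes=x.size(0))  # normalise per target node

        adj_t = adj_t.set_value(alpha, layout='coo')
        return self.message_and_aggregate(adj_t, x)

    def message_and_aggregate(self, adj_t: SparseTensor, x: Tensor) -> Tensor:
        # One sparse matmul per head: out[:, h] = A_h @ x[:, h], where the
        # non-zeros of A_h are that head's normalised attention coefficients.
        alpha = adj_t.storage.value()               # [E, H]
        outs = [matmul(adj_t.set_value(alpha[:, h], layout='coo'), x[:, h])
                for h in range(self.heads)]
        return torch.stack(outs, dim=1).view(-1, self.heads * self.out_channels)
```

A hypothetical usage, with the graph stored as the transposed adjacency `adj_t` (row = target, col = source), following PyG's `SparseTensor` convention:

```python
from torch_sparse import SparseTensor

N, F_in = 1_000, 64
x = torch.randn(N, F_in)
dense = (torch.rand(N, N) < 0.5).float()    # very dense graph: E ~ N^2 / 2
adj_t = SparseTensor.from_dense(dense)

conv = SparseGATConv(F_in, out_channels=16, heads=4)
out = conv(x, adj_t)                        # [N, 4 * 16]
```

The trade-off, flagged in the reply below, is the explicit Python loop over heads inside `message_and_aggregate`.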
Note that I still have to materialise the attention as a tensor of shape `[E, num_heads]`, because the attention coefficients are different for each edge. But the node features no longer need to be expanded for each edge.

Any thoughts on this? Thanks!

-
Yes, this works. Good job :) The downside of this approach is that we need to manually loop over each head.