Replies: 1 comment
---
You can achieve this via:

```python
from torch_geometric.nn.aggr import SumAggregation

aggr = SumAggregation()
aggr(x[mask], index[mask])
```
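A runnable sketch of this suggestion (the variable names are illustrative; `index` plays the role of the per-node graph-assignment vector, and passing `dim_size` makes sure a graph with no masked nodes still gets an all-zero output row):

```python
import torch
from torch_geometric.nn.aggr import SumAggregation

# Two graphs in one batch: nodes 0-3 belong to graph 0, nodes 4-6 to graph 1.
x = torch.randn(7, 16)                       # node features
index = torch.tensor([0, 0, 0, 0, 1, 1, 1])  # graph assignment per node

# Boolean mask selecting the nodes to aggregate; graph 1 has none selected.
mask = torch.tensor([True, False, True, False, False, False, False])

aggr = SumAggregation()
# dim_size=2 keeps one output row per graph, so graph 1's row stays all zeros.
out = aggr(x[mask], index[mask], dim_size=2)
print(out.shape)  # torch.Size([2, 16])
```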
---
Hello,
I am trying to build a forward pass in a GNN model that takes the aggregation (e.g. mean or sum) of several specific nodes in the graph and passes it to a fully connected layer after the graph-specific layers.

For example, graph `g1` has node indexes `[5, 7, 10]` (stored as a 1-hot tensor `mask` in the `Data` object), which means that the `mean` or `sum` pooling should only aggregate these three nodes. Graph `g2` has node indexes `[]` (the mask is all zeros), which means that the aggregation should return a zero vector of length `g2.x.shape[1]`, i.e. a feature vector of just zeros.
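For concreteness, here is a minimal sketch of that setup (the `mask` attribute name and the feature size are illustrative; `Data` accepts arbitrary extra attributes):

```python
import torch
from torch_geometric.data import Data

# g1: aggregate only nodes 5, 7 and 10 (1-hot/boolean mask over all nodes).
num_nodes, num_features = 12, 16
mask1 = torch.zeros(num_nodes, dtype=torch.bool)
mask1[[5, 7, 10]] = True
g1 = Data(x=torch.randn(num_nodes, num_features), mask=mask1)

# g2: no nodes selected -- pooling should yield a zero vector of length x.shape[1].
g2 = Data(x=torch.randn(4, num_features), mask=torch.zeros(4, dtype=torch.bool))
```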
Is there a way to elegantly incorporate this `mask` tensor?

I have looked at the code in the `torch_geometric.nn.aggr` module and, as far as I can tell, there is no in-built way to do this. I am currently using a class I have made that modifies `Aggregation` -- happy to make a pull request if this would be useful?

Thanks for any input in advance!
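For reference, roughly the shape of what I mean (an illustrative sketch written as a plain `torch.nn.Module` wrapper rather than my actual class; computing `dim_size` from the unmasked `index` is what gives an all-masked-out graph its zero row):

```python
import torch
from torch_geometric.nn.aggr import SumAggregation


class MaskedAggregation(torch.nn.Module):
    """Apply an aggregation only to the nodes where mask == True.

    dim_size is derived from the unmasked index, so a graph whose mask
    is all False still gets a row in the output, filled with zeros.
    """
    def __init__(self, aggr):
        super().__init__()
        self.aggr = aggr

    def forward(self, x, index, mask, dim_size=None):
        if dim_size is None:
            dim_size = int(index.max()) + 1 if index.numel() > 0 else 0
        return self.aggr(x[mask], index[mask], dim_size=dim_size)


# Example: graph 1 has no selected nodes, so out[1] is all zeros.
pool = MaskedAggregation(SumAggregation())
x = torch.randn(7, 16)
index = torch.tensor([0, 0, 0, 0, 1, 1, 1])
mask = torch.tensor([True, False, True, False, False, False, False])
out = pool(x, index, mask)  # shape: [2, 16]
```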