How GNNs are defined in Pytorch Geometric #3254
tristen-tooming started this conversation in General (Replies: 2 comments)
-
Ah, heterogeneous graphs are now included in the latest documentation. Labeling all the nodes separately and using the code provided in the documentation probably solves the case:

    import torch
    from torch_geometric.nn import HGTConv, Linear

    class HGT(torch.nn.Module):
        def __init__(self, hidden_channels, out_channels, num_heads, num_layers):
            super().__init__()

            # One input projection per node type; Linear(-1, ...) infers the input
            # size lazily. `data` is the HeteroData graph available in scope.
            self.lin_dict = torch.nn.ModuleDict()
            for node_type in data.node_types:
                self.lin_dict[node_type] = Linear(-1, hidden_channels)

            # Stack of heterogeneous graph transformer layers.
            self.convs = torch.nn.ModuleList()
            for _ in range(num_layers):
                conv = HGTConv(hidden_channels, hidden_channels, data.metadata(),
                               num_heads, group='sum')
                self.convs.append(conv)

            self.lin = Linear(hidden_channels, out_channels)

        def forward(self, x_dict, edge_index_dict):
            # Project every node type into the shared hidden space.
            for node_type, x in x_dict.items():
                x_dict[node_type] = self.lin_dict[node_type](x).relu_()

            # Message passing over all edge types.
            for conv in self.convs:
                x_dict = conv(x_dict, edge_index_dict)

            return self.lin(x_dict['author'])
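For context, a minimal usage sketch follows; it assumes `data` is a `HeteroData` object with an `'author'` node type (e.g. the OGB_MAG example used in the PyG docs), since the class above reads `data.node_types` and `data.metadata()` when it is constructed:

```python
# Sketch only: assumes `data` is a HeteroData graph with an 'author' node type.
model = HGT(hidden_channels=64, out_channels=4, num_heads=2, num_layers=2)

# The Linear(-1, ...) projections are lazily initialized, so run one forward
# pass before inspecting or optimizing the parameters.
with torch.no_grad():
    out = model(data.x_dict, data.edge_index_dict)  # shape: [num_authors, out_channels]
```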
-
GNNs share their weights across nodes and edges, so that they also generalize to new data. In your case, defining "edge types" across which you do not wish to share weights is a good solution.
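As an illustration of that suggestion (not code from this thread), `HeteroConv` lets you attach a separate operator, and therefore separate weights, to each edge type; the node and relation names below are made up:

```python
from torch_geometric.nn import HeteroConv, SAGEConv

# Hypothetical edge types: each relation gets its own SAGEConv instance,
# so weights are NOT shared across 'rel_1' and 'rel_2'.
conv = HeteroConv({
    ('node', 'rel_1', 'node'): SAGEConv((-1, -1), 32),
    ('node', 'rel_2', 'node'): SAGEConv((-1, -1), 32),
}, aggr='sum')

# Applied to dictionaries keyed by node type and edge type:
# out_dict = conv(x_dict, edge_index_dict)
```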
-
For example, let's initialise a simple directed graph. The graph looks like this:
G = 10 -----> 25 -----> 15
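The construction code is not shown in the extracted post; a minimal sketch of how such a graph could be built in PyG (only the node values 10, 25, 15 and the two directed edges are taken from the drawing above) might be:

```python
import torch
from torch_geometric.data import Data

# One scalar feature per node: 10, 25, 15.
x = torch.tensor([[10.0], [25.0], [15.0]])

# Two directed edges: node 0 -> node 1 and node 1 -> node 2.
edge_index = torch.tensor([[0, 1],
                           [1, 2]])

G = Data(x=x, edge_index=edge_index)
```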
Now we define a simple GNN. If we input G into it, we get an output:
10 -----> -2.64 -----> 4.2
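The GNN definition itself is also missing from the extracted post; a minimal stand-in that maps each scalar node feature to a new scalar embedding could look like this (the layer choice is an assumption, and the output values will differ from the drawing):

```python
import torch
from torch_geometric.nn import GCNConv

class SimpleGNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # One graph convolution: 1 input feature -> 1 output feature per node.
        self.conv = GCNConv(1, 1, add_self_loops=False)

    def forward(self, data):
        return self.conv(data.x, data.edge_index)

model = SimpleGNN()
out = model(G)  # G is the Data object built above; out has shape [3, 1]
```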
The question: are there separate GNNs for each edge (as many GNNs as there are edges), or does PyTorch Geometric use the same GNN to generate the new node embeddings?
More concretely, with a neighbourhood size of 1 and no self-loops, it would look like this:
With separate GNNs:
10 -GNN_1-> output_1 -GNN_2-> output_2
With one overall GNN:
10 -GNN-> output_1 -GNN-> output_2
We are solving a problem where each edge is its own function, so we would need separate GNNs to learn the function between the nodes.
So the data looks like this, where each F_i is the function generating the next node's feature vector:
10 -F_1-> 25 -F_2-> 15
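To make the weight-sharing point concrete (an illustrative sketch, not code from the original post): a single PyG layer holds one set of parameters and applies it along every edge, so modelling a genuinely different function per edge needs distinct layers or distinct edge types, as the second reply above suggests. One way to express "each edge is its own function" is a relational layer such as `RGCNConv`, where every relation type carries its own weight matrix; giving each edge its own relation type then gives it its own transformation:

```python
import torch
from torch_geometric.nn import RGCNConv

x = torch.tensor([[10.0], [25.0], [15.0]])
edge_index = torch.tensor([[0, 1],
                           [1, 2]])

# Assign every edge its own relation type, so each edge is transformed
# by its own weight matrix (relation 0 for 0->1, relation 1 for 1->2).
edge_type = torch.tensor([0, 1])

conv = RGCNConv(1, 1, num_relations=2)
out = conv(x, edge_index, edge_type)  # shape [3, 1]
```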