2323 " ## Introduction\n " ,
2424 " \n " ,
2525 " [Graph neural networks](https://en.wikipedia.org/wiki/Graph_neural_network)\n " ,
26- " is the prefered neural network architecture for processing data structured as\n " ,
26+ " are the preferred neural network architectures for processing data structured as\n " ,
2727 " graphs (for example, social networks or molecule structures), yielding\n " ,
2828 " better results than fully-connected networks or convolutional networks.\n " ,
2929 " \n " ,
5151 },
5252 {
5353 "cell_type" : " code" ,
54- "execution_count" : 0 ,
54+ "execution_count" : null ,
5555 "metadata" : {
5656 "colab_type" : " code"
5757 },
8989 },
9090 {
9191 "cell_type" : " code" ,
92- "execution_count" : 0 ,
92+ "execution_count" : null ,
9393 "metadata" : {
9494 "colab_type" : " code"
9595 },
142142 },
143143 {
144144 "cell_type" : " code" ,
145- "execution_count" : 0 ,
145+ "execution_count" : null ,
146146 "metadata" : {
147147 "colab_type" : " code"
148148 },
167167 },
168168 {
169169 "cell_type" : " code" ,
170- "execution_count" : 0 ,
170+ "execution_count" : null ,
171171 "metadata" : {
172172 "colab_type" : " code"
173173 },
204204 " aggregated information of *N*-hops (where *N* is decided by the number of layers of the\n " ,
205205 " GAT). Importantly, in contrast to the\n " ,
206206 " [graph convolutional network](https://arxiv.org/abs/1609.02907) (GCN)\n " ,
207- " the GAT makes use of attention machanisms \n " ,
207+ " the GAT makes use of attention mechanisms\n " ,
208208 " to aggregate information from neighboring nodes (or *source nodes*). In other words, instead of simply\n " ,
209209 " averaging/summing node states from source nodes (*source papers*) to the target node (*target papers*),\n " ,
210210 " GAT first applies normalized attention scores to each source node state and then sums."
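The aggregation step described above can be sketched in a few lines. This is an illustrative stand-in, not the notebook's implementation: `source_states` and `attention_logits` are hypothetical intermediate values of a GAT layer for a single target node, and the sketch only shows how normalized attention scores turn a plain sum into a weighted sum.

```python
import numpy as np

# Hypothetical intermediates for one target node with 3 source (neighbor) nodes.
source_states = np.array([[1.0, 0.0],
                          [0.0, 1.0],
                          [1.0, 1.0]])          # one state vector per source node
attention_logits = np.array([2.0, 0.0, 0.0])    # unnormalized attention scores

# Normalize the scores with a softmax so they sum to 1
scores = np.exp(attention_logits - attention_logits.max())
scores /= scores.sum()

# GAT-style aggregation: weight each source state by its score, then sum
# (a GCN would instead average/sum the source states with equal weight).
target_state = (scores[:, None] * source_states).sum(axis=0)
```

Because the scores are normalized, sources with higher attention logits dominate the aggregated target state, which is the behavior the paragraph above contrasts with plain averaging.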
239239 },
240240 {
241241 "cell_type" : " code" ,
242- "execution_count" : 0 ,
242+ "execution_count" : null ,
243243 "metadata" : {
244244 "colab_type" : " code"
245245 },
336336 " else:\n " ,
337337 " outputs = tf.reduce_mean(tf.stack(outputs, axis=-1), axis=-1)\n " ,
338338 " # Activate and return node states\n " ,
339- " return tf.nn.relu(outputs)\n " ,
340- " "
339+ " return tf.nn.relu(outputs)\n "
341340 ]
342341 },
343342 {
357356 },
358357 {
359358 "cell_type" : " code" ,
360- "execution_count" : 0 ,
359+ "execution_count" : null ,
361360 "metadata" : {
362361 "colab_type" : " code"
363362 },
425424 " # Update metric(s)\n " ,
426425 " self.compiled_metrics.update_state(labels, tf.gather(outputs, indices))\n " ,
427426 " \n " ,
428- " return {m.name: m.result() for m in self.metrics}\n " ,
429- " "
427+ " return {m.name: m.result() for m in self.metrics}\n "
430428 ]
431429 },
432430 {
440438 },
441439 {
442440 "cell_type" : " code" ,
443- "execution_count" : 0 ,
441+ "execution_count" : null ,
444442 "metadata" : {
445443 "colab_type" : " code"
446444 },
499497 },
500498 {
501499 "cell_type" : " code" ,
502- "execution_count" : 0 ,
500+ "execution_count" : null ,
503501 "metadata" : {
504502 "colab_type" : " code"
505503 },
562560 },
563561 "nbformat" : 4 ,
564562 "nbformat_minor" : 0
565- }
563+ }