Hi Kate, thanks for the Pytorch code of MAML!
I have two questions (and suspect a bug) about your implementation.
Line 10 of Algorithm 2 in the original paper indicates that the meta-update is performed using every D'_i. To do so with your code, I think the function meta_update needs access to every sampled task, since each task holds its own D'_i in your implementation.
Line 172 in 75907ac
`self.meta_update(task, grads)`
However, it seems that you perform meta_update with a single task, so only the D'_i of that one specific task is used.
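To make the distinction concrete, here is a toy sketch (not the repo's code): scalar quadratics L_i(w) = (w - c_i)^2 stand in for the per-task losses, the meta-gradient is first-order (the second-order term from differentiating through the inner step is dropped), and all names are illustrative.

```python
# First-order MAML sketch on scalar quadratics L_i(w) = (w - c_i)^2.
# Illustrative only -- none of these names come from the repo.

def loss(w, c):
    return (w - c) ** 2

def grad(w, c):          # dL/dw for L(w) = (w - c)^2
    return 2.0 * (w - c)

def meta_step(w, task_targets, alpha=0.1, beta=0.1):
    # Inner loop: adapt separately on each task's support set D_i.
    adapted = [w - alpha * grad(w, c) for c in task_targets]
    # Outer loop: per Algorithm 2, line 10, the meta-gradient sums the
    # query-set (D'_i) gradients over EVERY sampled task, evaluated at
    # each task's adapted parameters -- not just one task's.
    meta_grad = sum(grad(w_i, c) for w_i, c in zip(adapted, task_targets))
    return w - beta * meta_grad

w = 0.0
for _ in range(100):
    w = meta_step(w, task_targets=[1.0, 3.0])
print(round(w, 3))  # -> 2.0, the point that adapts well to both tasks
```

Summing over all tasks is what makes the meta-update seek parameters that adapt well on average; updating from a single task's D'_i instead makes the outer step behave like noisy per-task SGD.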
Line 10 also states that the meta-loss is calculated with the adapted parameters.
Line 71 in 75907ac
`loss, out = forward_pass(self.net, in_, target)`
You seem to calculate the meta-loss with self.net, which I think holds the original parameters (\theta) instead of the adapted parameters (\theta'_i).
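The same scalar stand-in (a hypothetical quadratic loss, not the repo's forward_pass) shows why evaluating at the original versus the adapted parameters gives different meta-losses:

```python
# Scalar sketch of the distinction; L(w) = (w - c)^2 stands in for the
# per-task query loss on D'_i. Names are illustrative, not from the repo.

def loss(w, c):
    return (w - c) ** 2

def grad(w, c):
    return 2.0 * (w - c)

theta, alpha, c = 0.0, 0.1, 1.0
theta_adapted = theta - alpha * grad(theta, c)   # inner-loop step on D_i

# Algorithm 2, line 10 evaluates the query loss at the ADAPTED parameters:
meta_loss_correct = loss(theta_adapted, c)       # L(theta'_i)
# A forward pass through the unmodified network would instead compute:
meta_loss_wrong = loss(theta, c)                 # L(theta)

print(round(meta_loss_correct, 2), meta_loss_wrong)  # -> 0.64 1.0
```

In PyTorch this usually means running the forward pass functionally with the adapted weight tensors (so gradients can still flow back to \theta), rather than through the module's stored parameters.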
Am I missing something?