Thank you for sharing the code.
I wonder whether there is a typo in the implementation of the `KLDivTeacherList` class.
The implementation is
```python
class KLDivTeacherList(nn.Module):
    def __init__(self):
        super(KLDivTeacherList, self).__init__()
        self.kl = torch.nn.KLDivLoss(reduction="batchmean")

    def forward(self, scores, labels):
        loss = self.kl(scores.softmax(-1), labels.softmax(-1))  # is this a typo?
        return loss
```
However, the PyTorch documentation for `KLDivLoss` (https://pytorch.org/docs/1.9.1/generated/torch.nn.KLDivLoss.html) says:

> As with NLLLoss, the input given is expected to contain log-probabilities. The targets are interpreted as probabilities by default.
So, from what I understand, the forward function should be
```python
def forward(self, scores, labels):
    # before: softmax of scores
    # after:  log-softmax of scores
    loss = self.kl(
        torch.nn.functional.log_softmax(scores, dim=-1),
        torch.nn.functional.softmax(labels, dim=-1),
    )
    return loss
```
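To double-check this reading of the docs, here is a small standalone sketch (my own, not from the repo; the tensor shapes and seed are arbitrary) comparing the log-softmax usage of `KLDivLoss` against KL divergence computed by hand:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
scores = torch.randn(4, 10)  # hypothetical student logits
labels = torch.randn(4, 10)  # hypothetical teacher logits

kl = torch.nn.KLDivLoss(reduction="batchmean")

# Documented usage: input is log-probabilities, target is probabilities.
loss_log = kl(F.log_softmax(scores, dim=-1), F.softmax(labels, dim=-1))

# Manual KL(target || input) averaged over the batch, for comparison.
p = F.softmax(labels, dim=-1)
q = F.softmax(scores, dim=-1)
manual = (p * (p.log() - q.log())).sum(dim=-1).mean()

print(torch.allclose(loss_log, manual))  # True: log-softmax input matches manual KL

# Passing softmax(scores) instead (as in the current implementation)
# gives a different value, which is why it looks like a typo.
loss_softmax = kl(F.softmax(scores, dim=-1), F.softmax(labels, dim=-1))
print(torch.allclose(loss_softmax, manual))  # False
```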
I wonder if this is a typo or if I'm missing something.
Thanks in advance.