on-line hard mining #708
CoinCheung wants to merge 1 commit into facebookresearch:master from CoinCheung:master
Conversation
        beta=1,
    ).sum(dim=1, keepdim=True)
    ohem_loss = classification_loss
    ohem_loss[sampled_pos_inds_subset[:, None]] += box_loss
@hpandeycodeit @pietern @sotte This line's operation may be wrong; it should be modified to:
ohem_loss[sampled_pos_inds_subset[:, None]] = ohem_loss[sampled_pos_inds_subset[:, None]] + box_loss
Hi,
Thanks for reviewing. Would you please explain why this is wrong? I just do not understand the difference.
@CoinCheung I am not sure about this. I tried your code above, but I found it fails to add the box_loss to the corresponding classification loss during OHEM. Maybe you can debug whether it is the same in your case.
Actually, I changed the code to:
ohem_loss = classification_loss.clone()
ohem_loss[sampled_pos_inds_subset[:, None]] = ohem_loss[sampled_pos_inds_subset[:, None]] + box_loss
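A minimal sketch of why the distinction matters, assuming classification_loss is a non-leaf tensor inside the autograd graph (all shapes and values below are made up for illustration, not taken from the PR):

```python
import torch

logits = torch.randn(8, requires_grad=True)
classification_loss = logits.sigmoid()      # non-leaf; sigmoid saves its output for backward
sampled_pos_inds_subset = torch.tensor([1, 3])
box_loss = torch.ones(2)

ohem_loss = classification_loss             # an alias, NOT a copy

# The in-place indexed add mutates classification_loss through the alias and
# bumps its version counter, so backward() raises "one of the variables needed
# for gradient computation has been modified by an inplace operation":
#   ohem_loss[sampled_pos_inds_subset] += box_loss

# Cloning first gives ohem_loss its own storage, so the indexed write no
# longer touches the tensor that sigmoid's backward depends on:
ohem_loss = classification_loss.clone()
ohem_loss[sampled_pos_inds_subset] = ohem_loss[sampled_pos_inds_subset] + box_loss
ohem_loss.sum().backward()                  # gradients reach logits
```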
I would at the very least not use an in-place operation, as it might break backpropagation depending on the operation.
You could replace it with something like what @WeihongM mentioned.
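A fully out-of-place variant avoids indexed writes entirely, e.g. by scattering box_loss onto a zero tensor and adding. A sketch under the same assumed names and shapes as above (Tensor.index_put, without the trailing underscore, is the out-of-place form of index_put_):

```python
# Build the box-loss contribution on a fresh zero tensor, then add it
# out-of-place, so no tensor in the autograd graph is ever mutated.
box_term = torch.zeros_like(classification_loss).index_put(
    (sampled_pos_inds_subset,), box_loss
)
ohem_loss = classification_loss + box_term
```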
Thanks for reviewing this. I have modified it as you suggested.
fmassa left a comment
This looks generally good, thanks!
Could you please address the comments? Then this will be good to merge!
    p = PolygonInstance(p, size)
    if len(p) > 0:
        self.polygons.append(p)
    # if len(p) > 0:
Why do we have this change?
Hi,
I believe this modification is no longer needed. I made it because our json files have an empty segmentation field in the annotations, which will break the logic here.
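For context, a sketch of the kind of guard under discussion, assuming COCO-style annotations whose segmentation field may be an empty list (the names and sample data are illustrative, not the repository's exact code):

```python
# Filter out annotations whose segmentation field is empty before building
# polygon instances; empty entries would otherwise break the logic above.
annotations = [
    {"segmentation": [[0, 0, 10, 0, 10, 10]]},  # normal polygon
    {"segmentation": []},                        # empty field from the json
]
valid_segs = [a["segmentation"] for a in annotations if len(a["segmentation"]) > 0]
```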
I have re-committed this PR, and the new link is here. If you like, we can continue the discussion on that page :)
@CoinCheung You can get the deleted branch in your fork back through betaflight/betaflight#8112 (comment)
@CoinCheung Why is ohem_loss.size(0) compared with self.batch_size_per_image? It seems that ohem_loss includes all proposals from all images in the batch; maybe it should be torch.split image-wise and then top_k used to select per image?
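For reference, a sketch of the image-wise selection described here, assuming boxes_per_image lists the proposal count for each image and the batch_size_per_image hardest proposals are kept per image (all names are illustrative, not the PR's actual code):

```python
import torch

def select_hard_inds(ohem_loss, boxes_per_image, batch_size_per_image):
    """Return indices of the hardest proposals, chosen per image."""
    keep, offset = [], 0
    for losses in ohem_loss.split(boxes_per_image, dim=0):
        k = min(batch_size_per_image, losses.numel())
        _, topk = losses.flatten().topk(k)   # hardest k within this image
        keep.append(topk + offset)           # shift back to batch indexing
        offset += losses.size(0)
    return torch.cat(keep)
```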
@CoinCheung Same question as Louie Yang; I think this implementation has some problems.
This is a demo of OHEM. I have tested it on our own dataset, and the mAP/mAR improve by roughly 1 point.
The original paper uses NMS to eliminate overlapping RoIs, but it uses selective search to generate proposals, whereas this repository relies on an RPN, so I did not implement that part.
If this modification has potential risks or errors, please point them out and I will fix them.
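For readers who want the gist, a compressed sketch of the OHEM step this PR implements, following Shrivastava et al. (2016): compute a per-RoI loss for every proposal, keep only the hardest k, and average over those. Everything below (names, shapes, the cross-entropy choice) is illustrative, not the PR's exact code:

```python
import torch
import torch.nn.functional as F

def ohem_loss(class_logits, labels, box_loss_per_pos, pos_inds, k):
    # Per-RoI classification loss; reduction="none" keeps one loss per proposal.
    cls_loss = F.cross_entropy(class_logits, labels, reduction="none")

    # Add the localization loss to the positive RoIs, out-of-place.
    per_roi = cls_loss + torch.zeros_like(cls_loss).index_put(
        (pos_inds,), box_loss_per_pos
    )

    # Hard example mining: backpropagate only through the k largest losses.
    hard, _ = per_roi.topk(min(k, per_roi.numel()))
    return hard.mean()
```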