It seems that `helper.unpackAnnotations(trainCats, ...)` only returns annotations for images that contain all of the provided training categories. In the example of persons and cars, this method would only use annotations from images that contain both persons and cars. Thus, we might lose relevant training data from person-only or car-only images.
Digging into the CocoAPI, this behavior arises from taking the intersection in `CocoAPI.m`:

```matlab
for i=1:length(t), ids=intersect(ids,t{i}); end
```

This code is reached via the call on line 11 of `unpackAnnotations.m`:

```matlab
imgIds = coco.getImgIds('catIds',catIds);
```
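To make the effect concrete, here is an illustrative sketch of what that intersection loop does when two category ids are passed in (the image ids are made up):

```matlab
% Hypothetical per-category image id lists, as gathered into t:
t{1} = [1 2 3];   % images containing persons
t{2} = [2 3 4];   % images containing cars

ids = t{1};
for i = 1:length(t), ids = intersect(ids, t{i}); end
% ids is now [2 3]: the person-only image 1 and the
% car-only image 4 have been dropped.
```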
I just stumbled over this behavior when adapting the Mask R-CNN example to my own training set. Since I'm not that familiar with the codebase, though, this might well be intended behavior.
I know this repository isn't the most active, so I'll chime in with my two cents from using it in the past.
From looking at it, this appears to be the intended functionality of the CocoAPI: it sequentially filters the ids based on the filter struct passed in. It could be worth mentioning here as an enhancement to the API, or at least as a clarification in the documentation.
As for working around the issue right now, you could filter twice (once for cars and once for people) and combine (union) the returned imgIds. The model shouldn't be penalized by images that are empty for a certain class unless a misclassification is made.
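A minimal sketch of that workaround, assuming a constructed `CocoApi` object named `coco` and the category names from the example (I haven't run this against the repo; adapt the names to your own dataset):

```matlab
% Query each training category separately, then union the image ids,
% so person-only and car-only images are both kept.
catNames = {'person', 'car'};
imgIds = [];
for k = 1:numel(catNames)
    catId  = coco.getCatIds('catNms', catNames(k));
    imgIds = union(imgIds, coco.getImgIds('catIds', catId));
end
% imgIds now covers every image containing at least one of the categories.
```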
I think this should accomplish your goal so you can keep going on whichever project you are working on; however, this question would be best posed as an issue on the CocoAPI GitHub.
Hopefully this was helpful, but again, it's just my two cents on the two repositories.