Dear authors,
Great work! Thank you very much for open-sourcing this great repo.
While reproducing the paper's results, I found that experiments run with the repo code consistently outperform the numbers reported in the paper by about four points (44.6 vs. 39.06). Do you have any idea why? Did you use a different embedding model at the time, or does the repo include a method update that differs from the paper?
Thanks again!