Why is only QFocal loss implemented and not GFocal loss? #2747
-
According to the paper proposing QFocal loss ([https://arxiv.org/pdf/2006.04388.pdf](https://arxiv.org/pdf/2006.04388.pdf)), Distribution Focal loss is also proposed, replacing the rigid Dirac-delta representation of bounding box coordinates with a flexible, learned distribution. Is there a reason only QFocal loss was chosen to be implemented and not Distribution Focal loss as well? @yxNONG
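For context, Distribution Focal loss (DFL) in that paper supervises a discrete distribution over box-offset bins so that its expectation lands on the continuous regression target. Below is a minimal PyTorch sketch of Eq. 6 from the paper, for illustration only; the function name `distribution_focal_loss` and the bin layout are assumptions, and this is not code from this repository.

```python
import torch
import torch.nn.functional as F


def distribution_focal_loss(pred_logits, target):
    """Sketch of Distribution Focal Loss (Eq. 6, arXiv:2006.04388).

    pred_logits: (N, n_bins) logits over discretized box offsets {0, 1, ..., n_bins - 1}
    target:      (N,) continuous regression targets in [0, n_bins - 1]
    returns:     (N,) per-sample losses
    """
    n_bins = pred_logits.size(1)
    target = target.clamp(0, n_bins - 1 - 1e-4)  # keep targets strictly inside the bin range
    left = target.floor().long()                 # nearest bin below the target, y_i
    right = left + 1                             # nearest bin above the target, y_{i+1}
    w_left = right.float() - target              # weight y_{i+1} - y
    w_right = target - left.float()              # weight y - y_i

    # Cross-entropy against the two neighbouring bins, weighted so the expectation
    # of the learned distribution is pulled toward the continuous target.
    return (F.cross_entropy(pred_logits, left, reduction='none') * w_left
            + F.cross_entropy(pred_logits, right, reduction='none') * w_right)
```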
-
@borschy QFocal loss was implemented in PR #1482 by @yxNONG, who might best be able to answer your question.
-
Hello, if QFocal loss is to be applied to the confidence (objectness) loss, then as I understand it, model.gr in train.py should be changed from the default 1.0 to the IoU. Is that correct? I look forward to your reply, thank you!
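For reference, the quality focal loss in the paper is a binary-cross-entropy-style loss on a soft quality target y in [0, 1] (e.g. the predicted box's IoU with its matched ground truth), modulated by |y - σ|^γ. If I read the default compute_loss correctly, the objectness target is already blended as (1.0 - model.gr) + model.gr * iou, so model.gr = 1.0 already supplies an IoU-valued target; please treat that as my reading rather than a confirmed answer. The class below is an illustrative sketch of the paper's formula, not the repository's QFocalLoss wrapper; the name `QualityFocalLoss` and the default gamma are assumptions.

```python
import torch.nn as nn


class QualityFocalLoss(nn.Module):
    """Sketch of Quality Focal Loss (Eq. 4, arXiv:2006.04388) for a soft IoU target."""

    def __init__(self, gamma=1.5):  # gamma value is an assumption, not a repo default
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss(reduction='none')
        self.gamma = gamma

    def forward(self, pred_logits, soft_target):
        # soft_target is the quality label y in [0, 1], e.g. the matched box IoU
        prob = pred_logits.sigmoid()
        modulating = (soft_target - prob).abs() ** self.gamma  # |y - sigma|^gamma
        return (self.bce(pred_logits, soft_target) * modulating).mean()
```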