Autodistill or Reparameterize? #1432
Unanswered
adrielkuek asked this question in Q&A

Question

So Roboflow provides a framework, Autodistill, to transfer knowledge from larger foundation models into smaller models that run faster on custom data: https://roboflow.com/train/yolo-world-and-yolov8. I'm curious about the differences between this framework and reparameterization of YOLO-World with the same custom dataset to improve efficiency (https://github.com/AILab-CVC/YOLO-World/blob/master/docs/reparameterize.md). From the YOLO-World paper, reparameterization, at least on the COCO vocabulary, does seem to perform slightly better than a fine-tuned YOLOv8.

Are there merits to both methods? Has anybody evaluated either approach, and which one is recommended? Thanks!
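For context, the Autodistill workflow behind the linked Roboflow page boils down to: a large "base" (teacher) model auto-labels your images, and a small "target" (student) model trains on those labels. Below is a minimal sketch of that flow; the `YOLOWorldModel` class name, the `label()` keyword arguments, and the paths are assumptions patterned on Autodistill's documented API, so check them against the current docs.

```python
# Minimal Autodistill sketch: the teacher auto-labels images, the student
# trains on them. Package/class names are assumptions based on the docs.
from autodistill.detection import CaptionOntology
from autodistill_yolo_world import YOLOWorldModel  # assumed connector class name
from autodistill_yolov8 import YOLOv8

# Map free-text prompts (what the teacher understands) to the class names
# the student model will predict.
ontology = CaptionOntology({"person wearing a hardhat": "hardhat"})

# Teacher: YOLO-World labels the raw images into a YOLO-format dataset.
base_model = YOLOWorldModel(ontology=ontology)
base_model.label(input_folder="./images", output_folder="./dataset")

# Student: train a compact YOLOv8 model on the auto-labeled dataset.
target_model = YOLOv8("yolov8n.pt")
target_model.train("./dataset/data.yaml", epochs=100)
```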
Replies: 2 comments
- Hi, @adrielkuek. Let me first convert this issue into a discussion.
- Hi @adrielkuek 👋🏻 This question should probably be directed to the Autodistill team, but I'll try to answer it as best I can. If you're using YOLO-World as a teacher model in Autodistill only to transfer that knowledge to YOLOv8, then you're right: it's better to reparameterize YOLO-World (see the sketch below). But Autodistill also lets you use other models as teachers.
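To contrast the two approaches: reparameterization, per the linked YOLO-World doc, encodes a fixed custom vocabulary with the CLIP text encoder once, offline, and folds the resulting embeddings into the detector, so no text encoder runs at inference. The YOLO-World repo ships its own tooling for this step; the snippet below is only an illustrative sketch of the offline embedding stage using the openai CLIP package, with the vocabulary and output filename invented for the example.

```python
# Illustrative sketch of the offline step behind YOLO-World reparameterization:
# encode a fixed vocabulary with CLIP once and save the embeddings, so the
# reparameterized detector needs no live text encoder at inference time.
# This mimics, but is not, the YOLO-World repo's actual tooling.
import clip  # pip install git+https://github.com/openai/CLIP.git
import numpy as np
import torch

vocabulary = ["hardhat", "safety vest", "person"]  # your fixed class list

model, _ = clip.load("ViT-B/32", device="cpu")
with torch.no_grad():
    tokens = clip.tokenize(vocabulary)
    embeddings = model.encode_text(tokens)
    # L2-normalize: YOLO-World matches region and text features by cosine similarity.
    embeddings = embeddings / embeddings.norm(dim=-1, keepdim=True)

np.save("custom_vocab_embeddings.npy", embeddings.cpu().numpy())
```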