# yolov8-vs-yolo11

Average precision per class for YOLOv8 and YOLO11 models pre-trained on the COCO dataset.

## Evaluation Results

### YOLOv8 vs YOLO11 AP

| Model Size | YOLOv8 (mAP50-95) | YOLO11 (mAP50-95) | mAP50-95 Improvement (YOLO11 - YOLOv8) |
|------------|-------------------|-------------------|----------------------------------------|
| n          | 0.371             | 0.392             | 0.021                                  |
| s          | 0.447             | 0.467             | 0.020                                  |
| m          | 0.501             | 0.514             | 0.013                                  |
| l          | 0.529             | 0.532             | 0.003                                  |
| x          | 0.540             | 0.547             | 0.007                                  |
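
If you want to reproduce numbers like these, the Ultralytics validation API is the usual route. The sketch below is a minimal example, not the exact script behind this table; it assumes the `ultralytics` package is installed and that COCO val2017 is available via `coco.yaml`.

```python
# Minimal sketch (not the exact script used here): validate pre-trained weights
# on COCO val2017 with the Ultralytics API and print overall and per-class AP.
from ultralytics import YOLO

for weights in ("yolov8n.pt", "yolo11n.pt"):
    model = YOLO(weights)                     # pre-trained COCO checkpoint
    metrics = model.val(data="coco.yaml")     # runs COCO val2017 evaluation
    print(weights, "mAP50-95:", metrics.box.map)        # overall mAP50-95
    print(weights, "per-class AP:", metrics.box.maps)   # per-class mAP50-95 array
```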

### YOLOv8 vs YOLO11 Parameters

| Model Size | YOLOv8 Parameters (M) | YOLO11 Parameters (M) | Reduction Rate (%) |
|------------|-----------------------|-----------------------|--------------------|
| n          | 3.2                   | 2.6                   | 18.75%             |
| s          | 11.2                  | 9.4                   | 16.07%             |
| m          | 25.9                  | 20.1                  | 22.39%             |
| l          | 43.7                  | 25.3                  | 42.09%             |
| x          | 68.2                  | 56.9                  | 16.55%             |

### YOLOv8 vs YOLO11 FLOPs

| Model Size | YOLOv8 FLOPs (B) | YOLO11 FLOPs (B) | Reduction Rate (%) |
|------------|------------------|------------------|--------------------|
| n          | 8.7              | 6.5              | 25.29%             |
| s          | 28.6             | 21.5             | 24.83%             |
| m          | 78.9             | 68.0             | 13.81%             |
| l          | 165.2            | 86.9             | 47.40%             |
| x          | 257.8            | 194.9            | 24.40%             |
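
The parameter and FLOP figures above can be checked against the model summary that Ultralytics prints at load/validation time. A hedged sketch is below; `model.info()` prints a layers/parameters/GFLOPs summary, but its exact return value varies across `ultralytics` versions, so treat this as an assumption rather than a fixed API contract.

```python
# Minimal sketch: print layer / parameter / GFLOPs summaries for a model pair.
# model.info() prints a one-line summary (layers, parameters, gradients, GFLOPs).
from ultralytics import YOLO

for weights in ("yolov8l.pt", "yolo11l.pt"):
    model = YOLO(weights)
    model.info(verbose=True)  # summary corresponds to the published 640x640 figures
```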

I didn't measure the models' inference time because I was too lazy, and Ultralytics has already done it.
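
If you do want rough latency numbers of your own, a simple timing loop over `model.predict` gives a ballpark figure. The sketch below is an assumption-laden example, not the official Ultralytics benchmark: `"bus.jpg"` is a placeholder for any local test image, and the numbers depend heavily on hardware and backend.

```python
# Rough latency sketch (not the Ultralytics benchmark tables): time repeated
# predict() calls on a single image after a warm-up run.
import time
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.predict("bus.jpg", imgsz=640, verbose=False)   # warm-up run
runs = 100
start = time.perf_counter()
for _ in range(runs):
    model.predict("bus.jpg", imgsz=640, verbose=False)
print(f"mean latency: {(time.perf_counter() - start) / runs * 1000:.1f} ms/image")
```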

To examine the AP per class in detail, refer to the CSV files in this repository; the figures below plot the per-class comparison for each model size, and a sketch after the figures shows one way to compare the CSVs directly.



Per-class AP comparison figures: fig1_n, fig1_s, fig1_m, fig1_l, fig1_x
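
One way to dig into the CSVs is to load them with pandas and rank classes by the AP gap. The file name `per_class_ap_n.csv` and the columns `class`, `yolov8`, and `yolo11` below are hypothetical placeholders; adjust them to match the actual CSV layout in this repository.

```python
# Sketch for comparing per-class AP from a CSV. File and column names are
# hypothetical; rename them to match the real CSV files in this repo.
import pandas as pd

df = pd.read_csv("per_class_ap_n.csv")
df["delta"] = df["yolo11"] - df["yolov8"]                 # per-class mAP50-95 gain
print(df.sort_values("delta", ascending=False).head(10))  # classes where YOLO11 gains most
```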

## Fun Facts

ChatGPT-4 played a significant role in helping me with this. I provided the prompts, and it handled the details.
