---
license: other
license_name: sla0044
license_link: >-
  https://github.com/STMicroelectronics/stm32ai-modelzoo/instance_segmentation/yolov8n_seg/LICENSE.md
pipeline_tag: image-segmentation
---

# Yolov8n_seg

## **Use case** : `Instance segmentation`

# Model description

Yolov8n_seg is a lightweight and efficient model designed for instance segmentation tasks. It is part of the YOLO (You Only Look Once) family of models, known for their real-time object detection capabilities. The "n" in Yolov8n_seg indicates the nano variant, optimized for speed and resource efficiency, making it suitable for deployment on devices with limited computational power, such as mobile devices and embedded systems.

Yolov8n_seg is implemented in PyTorch by Ultralytics and quantized to int8 format using the TensorFlow Lite converter.
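The int8 conversion step can be sketched with the TensorFlow Lite converter's post-training quantization API. The toy Keras model and random representative dataset below are stand-ins, not the actual export pipeline: for Yolov8n_seg you would start from the TensorFlow model exported from the Ultralytics PyTorch weights and calibrate with preprocessed COCO images.

```python
import numpy as np
import tensorflow as tf

def quantize_to_int8(model, input_size=256, n_samples=8):
    """Post-training int8 quantization of a Keras model with the TFLite converter."""
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    def representative_dataset():
        # Calibration data: in practice, yield a few hundred preprocessed
        # COCO images instead of random tensors.
        for _ in range(n_samples):
            yield [np.random.rand(1, input_size, input_size, 3).astype(np.float32)]

    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()  # serialized .tflite flatbuffer (bytes)

# Tiny stand-in model so the sketch is self-contained.
toy = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(256, 256, 3)),
    tf.keras.layers.Conv2D(4, 3, padding="same", activation="relu"),
])
tflite_bytes = quantize_to_int8(toy)
```

Full-integer quantization (int8 inputs, outputs, and weights) is what allows the model to run on the STM32N6 NPU without float fallback.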

## Network information

| Network Information | Value                            |
|---------------------|----------------------------------|
| Framework           | TensorFlow Lite                  |
| Quantization        | int8                             |
| Paper               | https://arxiv.org/pdf/2305.09972 |

## Recommended platform

| Platform | Supported | Recommended |
|----------|-----------|-------------|
| STM32L0  | []        | []          |
| STM32L4  | []        | []          |
| STM32U5  | []        | []          |
| STM32MP1 | []        | []          |
| STM32MP2 | [x]       | []          |
| STM32N6  | [x]       | [x]         |

---

# Performances

## Metrics

Measurements are done with the default STM32Cube.AI configuration, with the input/output allocated option enabled.

### Reference **NPU** memory footprint based on COCO dataset

| Model | Dataset | Format | Resolution | Series | Internal RAM (KiB) | External RAM (KiB) | Weights Flash (KiB) | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|--------|--------------------|--------------------|---------------------|----------------------|-----------------------|
| [YOLOv8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_256_quant_pc_uf_seg_coco-st.tflite) | COCO | Int8 | 256x256x3 | STM32N6 | 2128 | 0.0 | 3425.39 | 10.0.0 | 2.0.0 |
| [YOLOv8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_320_quant_pc_uf_seg_coco-st.tflite) | COCO | Int8 | 320x320x3 | STM32N6 | 2564.06 | 0.0 | 3467.56 | 10.0.0 | 2.0.0 |

### Reference **NPU** inference time based on COCO Person dataset

| Model | Dataset | Format | Resolution | Board | Execution Engine | Inference time (ms) | Inf / sec | STM32Cube.AI version | STEdgeAI Core version |
|-------|---------|--------|------------|-------|------------------|---------------------|-----------|----------------------|-----------------------|
| [YOLOv8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_256_quant_pc_uf_seg_coco-st.tflite) | COCO-Person | Int8 | 256x256x3 | STM32N6570-DK | NPU/MCU | 37.59 | 26.61 | 10.0.0 | 2.0.0 |
| [YOLOv8n seg per channel](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/segmentation/yolov8n_320_quant_pc_uf_seg_coco-st.tflite) | COCO-Person | Int8 | 320x320x3 | STM32N6570-DK | NPU/MCU | 53.21 | 18.79 | 10.0.0 | 2.0.0 |
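Before flashing a board, the quantized `.tflite` files linked above can be sanity-checked on a host machine with the TensorFlow Lite interpreter. The helper below is a minimal sketch: it loads a `.tflite` file from a local path (download one of the linked models first) and reports the input/output tensor shapes and dtypes, which should match the resolutions and int8 format listed in the tables.

```python
import tensorflow as tf

def describe_tflite(model_path):
    """Load a .tflite file and return (input shape, input dtype, output shapes)."""
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    outs = interpreter.get_output_details()
    return inp["shape"].tolist(), inp["dtype"], [o["shape"].tolist() for o in outs]
```

For the 256x256 model, for example, the input shape is expected to be `[1, 256, 256, 3]` with an int8 dtype.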

## Retraining and Integration in a Simple Example

Please refer to the stm32ai-modelzoo-services GitHub repository [here](https://github.com/STMicroelectronics/stm32ai-modelzoo-services).
For instance segmentation, the models are stored in the Ultralytics repository. You can find them at the following link: [Ultralytics YOLOv8-STEdgeAI Models](https://github.com/stm32-hotspot/ultralytics/blob/main/examples/YOLOv8-STEdgeAI/stedgeai_models/).

Please refer to the [Ultralytics documentation](https://docs.ultralytics.com/tasks/segment/#train) to retrain the model.
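A retraining run with the Ultralytics Python API can be sketched as follows. The dataset YAML (`coco8-seg.yaml`, a tiny sample dataset bundled with Ultralytics), the epoch count, and the image size are placeholders; substitute your own dataset and training schedule, and see the Ultralytics documentation for the full set of options.

```python
def retrain_and_export(data_yaml="coco8-seg.yaml", epochs=100, imgsz=256):
    """Fine-tune YOLOv8n-seg and export an int8 TFLite model (sketch)."""
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolov8n-seg.pt")  # pretrained segmentation weights
    model.train(data=data_yaml, epochs=epochs, imgsz=imgsz)
    # int8 TFLite export; Ultralytics calibrates using images from `data`.
    return model.export(format="tflite", int8=True, data=data_yaml)
```

Setting `imgsz` to 256 or 320 keeps the retrained model consistent with the deployment resolutions benchmarked above.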

## References

<a id="1">[1]</a> T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, "Microsoft COCO: Common Objects in Context," European Conference on Computer Vision (ECCV), 2014. [Link](https://arxiv.org/abs/1405.0312)

<a id="2">[2]</a> Ultralytics, "YOLOv8: Next-Generation Object Detection and Segmentation Model," Ultralytics, 2023. [Link](https://github.com/ultralytics/ultralytics)