Releases: roboflow/supervision
supervision-0.18.0
🚀 Added
- `sv.PercentageBarAnnotator` allowing to annotate images and videos with percentage values representing confidence or another custom property. (#720)

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

percentage_bar_annotator = sv.PercentageBarAnnotator()
annotated_frame = percentage_bar_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

- `sv.RoundBoxAnnotator` allowing to annotate images and videos with bounding boxes with rounded corners. (#702)

- `sv.DetectionsSmoother` allowing for smoothing detections over multiple frames in video tracking. A minimal usage sketch follows the video below. (#696)
supervision-detection-smoothing.mp4
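A minimal sketch of how the smoother slots into a tracking loop. It assumes `sv.DetectionsSmoother` exposes an `update_with_detections` method and uses a hypothetical `"people-walking.mp4"` input; verify the method name against your installed version:

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
tracker = sv.ByteTrack()
smoother = sv.DetectionsSmoother()

for frame in sv.get_video_frames_generator(source_path="people-walking.mp4"):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    # smoothing is keyed on tracker_id, so run the tracker first
    detections = tracker.update_with_detections(detections)
    detections = smoother.update_with_detections(detections)
```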
- `sv.OrientedBoxAnnotator` allowing to annotate images and videos with OBB (Oriented Bounding Boxes). (#770)

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

- `sv.ColorPalette.from_matplotlib` allowing users to create a `sv.ColorPalette` instance from a Matplotlib color palette. (#769)
```python
import supervision as sv

sv.ColorPalette.from_matplotlib('viridis', 5)
# ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
```

🌱 Changed
- `sv.Detections.from_ultralytics` adding support for OBB (Oriented Bounding Boxes). (#770)
- `sv.LineZone` to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement over the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as `sv.Position.BOTTOM_CENTER`, or any other combination of anchors defined as `List[sv.Position]`. (#735)
- `sv.Detections` to support custom payload. (#700)
- `sv.Color`'s and `sv.ColorPalette`'s method of accessing predefined colors, transitioning from a function-based approach (`sv.Color.red()`) to a more intuitive and conventional property-based method (`sv.Color.RED`); a short sketch follows below. (#756) (#769)
Warning
`sv.ColorPalette.default()` is deprecated and will be removed in `supervision-0.21.0`. Use `sv.ColorPalette.DEFAULT` instead.
- `sv.ColorPalette.DEFAULT` value, giving users a more extensive set of annotation colors. (#769)
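A quick sketch of the property-based access; the `by_idx` lookup at the end is an assumed helper for pulling a single color out of a palette:

```python
import supervision as sv

color = sv.Color.RED               # replaces the deprecated sv.Color.red()
palette = sv.ColorPalette.DEFAULT  # replaces the deprecated sv.ColorPalette.default()

# assumed helper for indexing into a palette
first_color = palette.by_idx(0)
```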
- `sv.Detections.from_roboflow` to `sv.Detections.from_inference`, streamlining its functionality to be compatible with both the `inference` pip package and the Roboflow hosted API. (#677)
Warning
`Detections.from_roboflow()` is deprecated and will be removed in `supervision-0.21.0`. Use `Detections.from_inference` instead.
```python
import cv2
import supervision as sv
from inference.models.utils import get_roboflow_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_roboflow_model(model_id="yolov8s-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
```

🛠️ Fixed
- `sv.LineZone` functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road. A usage sketch follows the video below. (#735)
supervision-0.18.0-promo-sample-2-result.mp4
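A counting sketch combining both #735 changes; `triggering_anchors` is the assumed name of the new anchor-list parameter:

```python
import supervision as sv

detections = sv.Detections(...)

# count only objects whose bottom-center anchor crosses a horizontal line
line_zone = sv.LineZone(
    start=sv.Point(0, 400),
    end=sv.Point(1280, 400),
    triggering_anchors=[sv.Position.BOTTOM_CENTER],
)

crossed_in, crossed_out = line_zone.trigger(detections=detections)
print(line_zone.in_count, line_zone.out_count)
```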
🏆 Contributors
@onuralpszr (Onuralp SEZER), @HinePo (Rafael Levy), @xaristeidou (Christoforos Aristeidou), @revtheundead (Utku Özbek), @paulguerrie (Paul Guerrie), @yeldarby (Brad Dwyer), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.17.1
🚀 Added
- Support for Python 3.12.
🏆 Contributors
@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski)
supervision-0.17.0
🚀 Added
- `sv.PixelateAnnotator` allowing to pixelate objects on images and videos; a minimal sketch follows the video below. (#633)
walking-pixelate-corner-optimized.mp4
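A minimal sketch, assuming `sv.PixelateAnnotator` follows the shared `annotate(scene, detections)` interface used by the other annotators in this release:

```python
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> pixelate_annotator = sv.PixelateAnnotator()
>>> annotated_frame = pixelate_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
```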
- `sv.TriangleAnnotator` allowing to annotate images and videos with triangle markers. (#652)

- `sv.PolygonAnnotator` allowing to annotate images and videos with segmentation mask outline. (#602)

```python
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> polygon_annotator = sv.PolygonAnnotator()
>>> annotated_frame = polygon_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
```
walking-polygon-optimized.mp4
- `sv.assets` allowing download of video files that you can use in your demos. (#476)

```python
>>> from supervision.assets import download_assets, VideoAssets

>>> download_assets(VideoAssets.VEHICLES)
"vehicles.mp4"
```

- `Position.CENTER_OF_MASS` allowing to place labels in center of mass of segmentation masks. (#605)
- `sv.scale_boxes` allowing to scale `sv.Detections.xyxy` values (see the sketch after this list). (#651)
- `sv.calculate_dynamic_text_scale` and `sv.calculate_dynamic_line_thickness` allowing text scale and line thickness to match image resolution. (#637)
- `sv.Color.as_hex` allowing to extract color value in HEX format. (#620)
- `sv.Classifications.from_timm` allowing to load classification result from timm models. (#572)
- `sv.Classifications.from_clip` allowing to load classification result from clip model. (#478)
- `sv.Detections.from_azure_analyze_image` allowing to load detection results from Azure Image Analysis. (#571)
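Two of the new helpers in one sketch; the keyword names passed to `sv.scale_boxes` and the center-anchored scaling behavior are assumptions based on the descriptions above:

```python
>>> import numpy as np
>>> import supervision as sv

>>> sv.Color(r=255, g=0, b=0).as_hex()
'#ff0000'

>>> xyxy = np.array([[10.0, 10.0, 50.0, 50.0]])
>>> sv.scale_boxes(xyxy=xyxy, factor=1.5)  # assumed to grow each box around its center
```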
🌱 Changed

- `sv.BoxMaskAnnotator` renaming it to `sv.ColorAnnotator`. (#646)
- `sv.MaskAnnotator` to make it 5x faster. (#606)
🛠️ Fixed

- `sv.DetectionDataset.from_yolo` to ignore empty lines in annotation files. (#584)
- `sv.BlurAnnotator` to trim negative coordinates before blurring detections. (#555)
- `sv.TraceAnnotator` to respect trace position. (#511)
🏆 Contributors

@onuralpszr (Onuralp SEZER), @hugoles (Hugo Dutra), @karanjakhar (Karan Jakhar), @kim-jeonghyun (Jeonghyun Kim), @fdloopes (Felipe Lopes), @abhishek7kalra (Abhishek Kalra), @SummitStudiosDev, @xenteros, @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.16.0
🚀 Added
supervision-0.16.0-annotators.mp4
- `sv.BoxMaskAnnotator` allowing to annotate images and videos with box masks. (#422)

- `sv.HaloAnnotator` allowing to annotate images and videos with halo effect. (#433)
```python
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
```

- `sv.HeatMapAnnotator` allowing to annotate videos with heat maps. (#466)
- `sv.DotAnnotator` allowing to annotate images and videos with dots. (#492)
- `sv.draw_image` allowing to draw an image onto a given scene with specified opacity and dimensions. (#449)
- `sv.FPSMonitor` for monitoring frames per second (FPS) to benchmark latency; a sketch follows this list. (#280)
- 🤗 Hugging Face Annotators space. (#454)
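A latency-benchmarking sketch for `sv.FPSMonitor`; reading the FPS by calling the instance reflects this release's API as far as can be told from the note (later versions expose a property instead), so treat the call syntax as an assumption:

```python
>>> import supervision as sv

>>> fps_monitor = sv.FPSMonitor()

>>> for frame in sv.get_video_frames_generator(source_path="vehicles.mp4"):
...     fps_monitor.tick()   # register one processed frame
...     fps = fps_monitor()  # rolling frames-per-second estimate
```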
🌱 Changed

- `sv.LineZone.trigger` now returns `Tuple[np.ndarray, np.ndarray]`. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside. A short unpacking sketch follows this list. (#482)
- Annotator argument name from `color_map: str` to `color_lookup: ColorLookup` enum to increase type safety. (#465)
- `sv.MaskAnnotator` allowing 2x faster annotation. (#426)
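A sketch of unpacking the new return value, reusing a `line_zone` and `detections` built as in earlier examples; the arrays are assumed to align with the detections by index:

```python
>>> crossed_in, crossed_out = line_zone.trigger(detections=detections)

>>> entered = detections[crossed_in]  # crossed from outside to inside
>>> exited = detections[crossed_out]  # crossed from inside to outside
```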
🛠️ Fixed
- Poetry env definition allowing proper local installation. (#477)
- `sv.ByteTrack` to return `np.array([], dtype=int)` when `sv.Detections` is empty. (#430)
- YOLO NAS detection missing prediction part added & fixed. (#416)
- SAM detection in the demo notebook: `MaskAnnotator(color_map="index")` with `color_map` set to `index`. (#416)
🗑️ Deleted
Warning
Deleted `sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` as those are now replaced by `sv.Detections.from_ultralytics` and `sv.Classifications.from_ultralytics`. (#438)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @kapter, @keshav278 (Keshav Subramanian), @akashpambhar (Akash Pambhar), @AntonioConsiglio (Antonio Consiglio), @ashishdatta, @mario-dg (Mario da Graca), @jayaBalaR (JAYABALAMBIKA.R), @abhishek7kalra (Abhishek Kalra), @PankajKrana (Pankaj Kumar Rana), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.15.0
🚀 Added
supervision-0.15.0.mp4
- `sv.LabelAnnotator` allowing to annotate images and videos with text. (#170)
- `sv.BoundingBoxAnnotator` allowing to annotate images and videos with bounding boxes. (#170)
- `sv.BoxCornerAnnotator` allowing to annotate images and videos with just bounding box corners. (#170)
- `sv.MaskAnnotator` allowing to annotate images and videos with segmentation masks. (#170)
- `sv.EllipseAnnotator` allowing to annotate images and videos with ellipses (sports game style). (#170)
- `sv.CircleAnnotator` allowing to annotate images and videos with circles. (#386)
- `sv.TraceAnnotator` allowing to draw the path of moving objects on videos. (#354)
- `sv.BlurAnnotator` allowing to blur objects on images and videos. (#405)
```python
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
```

- Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision. (#354)
traffic_analysis_result.mov
🌱 Changed

- `sv.Detections.from_roboflow` now does not require `class_list` to be specified. The `class_id` value can be extracted directly from the inference response. (#399)
- `sv.VideoSink` now allows to customize the output codec (see the sketch below). (#381)
- `sv.InferenceSlicer` can now operate in multithreading mode. (#361)
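A sketch of the codec option; the `codec` keyword name is an assumption based on the release note:

```python
>>> import supervision as sv

>>> video_info = sv.VideoInfo.from_video_path("input.mp4")
>>> with sv.VideoSink(target_path="output.avi", video_info=video_info, codec="MJPG") as sink:
...     for frame in sv.get_video_frames_generator(source_path="input.mp4"):
...         sink.write_frame(frame)
```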
🛠️ Fixed

- `sv.Detections.from_deepsparse` to allow processing empty deepsparse result object. (#348)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @Killua7362 (Akshay Bhat), @fcakyon (Fatih C. Akyon), @akashAD98 (Akash A Desai), @Rajarshi-Misra (Rajarshi Misra), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.14.0
🚀 Added

- Support for SAHI inference technique with `sv.InferenceSlicer`. (#282)
```python
>>> import cv2
>>> import supervision as sv
>>> import numpy as np
>>> from ultralytics import YOLO

>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)

>>> def callback(image_slice: np.ndarray) -> sv.Detections:
...     result = model(image_slice)[0]
...     return sv.Detections.from_ultralytics(result)

>>> slicer = sv.InferenceSlicer(callback = callback)
>>> detections = slicer(image)
```

inference-slicer.mov
- `Detections.from_deepsparse` to enable seamless integration with DeepSparse framework. (#297)

- `sv.Classifications.from_ultralytics` to enable seamless integration with Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports. (#281)

Warning

`sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` are now deprecated and will be removed with the `supervision-0.16.0` release.

- First supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision. (#341)
detect-and-track-objects-on-video.mov
🌱 Changed

- `sv.ClassificationDataset` and `sv.DetectionDataset` now use image path (not image name) as dataset keys. (#296)
🛠️ Fixed

- `Detections.from_roboflow` to filter out polygons with fewer than 3 points. (#300)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.13.0
🚀 Added

- Support for mean average precision (mAP) for object detection models with `sv.MeanAveragePrecision`. (#236)
```python
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> dataset = sv.DetectionDataset.from_yolo(...)

>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
...     result = model(image)[0]
...     return sv.Detections.from_yolov8(result)

>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
...     dataset = dataset,
...     callback = callback
... )

>>> mean_average_precision.map50_95
0.433
```

- Support for `ByteTrack` for object tracking with `sv.ByteTrack`. (#256)
```python
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> annotator = sv.BoxAnnotator()

>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
...     results = model(frame)[0]
...     detections = sv.Detections.from_yolov8(results)
...     detections = byte_tracker.update_from_detections(detections=detections)
...     labels = [
...         f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
...         for _, _, confidence, class_id, tracker_id
...         in detections
...     ]
...     return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)

>>> sv.process_video(
...     source_path='...',
...     target_path='...',
...     callback=callback
... )
```

byte_track_result_small.mp4
- `sv.Detections.from_ultralytics` to enable seamless integration with Ultralytics framework. This will enable you to use `supervision` with all models that Ultralytics supports. (#222)

Warning

`sv.Detections.from_yolov8` is now deprecated and will be removed with the `supervision-0.15.0` release.

- `sv.Detections.from_paddledet` to enable seamless integration with PaddleDetection framework. (#191)

- Support for loading PASCAL VOC segmentation datasets with `sv.DetectionDataset`; a loading sketch follows. (#245)
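A loading sketch; the `from_pascal_voc` method name and its keyword arguments are assumptions, since the note above only says the support landed on `sv.DetectionDataset`:

```python
>>> import supervision as sv

>>> ds = sv.DetectionDataset.from_pascal_voc(
...     images_directory_path='...',
...     annotations_directory_path='...'
... )
```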
🏆 Contributors

@hardikdava (Hardik Dava), @kirilllzaitsev (Kirill Zaitsev), @onuralpszr (Onuralp SEZER), @dbroboflow, @mayankagarwals (Mayank Agarwal), @danigarciaoca (Daniel M. García-Ocaña), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.12.0
Warning
With the `supervision-0.12.0` release, we are terminating official support for Python 3.7. (#179)
🚀 Added

- Initial support for object detection model benchmarking with `sv.ConfusionMatrix`. (#177)
```python
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> dataset = sv.DetectionDataset.from_yolo(...)

>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
...     result = model(image)[0]
...     return sv.Detections.from_yolov8(result)

>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
...     dataset = dataset,
...     callback = callback
... )

>>> confusion_matrix.matrix
array([
    [0., 0., 0., 0.],
    [0., 1., 0., 1.],
    [0., 1., 1., 0.],
    [1., 1., 0., 0.]
])
```

- `Detections.from_mmdetection` to enable seamless integration with MMDetection framework. (#173)

- Ability to install package in `headless` or `desktop` mode. (#130)
🌱 Changed

- Packaging method from `setup.py` to `pyproject.toml`. (#180)
🛠️ Fixed

- `sv.DetectionDataset.from_coco` can't be loaded when there are images without annotations. (#188)
- `sv.DetectionDataset.from_yolo` can't load background instances. (#226)
🏆 Contributors
@kirilllzaitsev @hardikdava @onuralpszr @Ucag @SkalskiP @capjamesg
supervision-0.11.1
🛠️ Fixed
- `as_folder_structure` fails to save `sv.ClassificationDataset` when it is the result of inference. (#165)
🏆 Contributors
supervision-0.11.0
🚀 Added
- Ability to load and save `sv.DetectionDataset` in COCO format using `as_coco` and `from_coco` methods. (#150)
```python
>>> import supervision as sv

>>> ds = sv.DetectionDataset.from_coco(
...     images_directory_path='...',
...     annotations_path='...'
... )

>>> ds.as_coco(
...     images_directory_path='...',
...     annotations_path='...'
... )
```

- Ability to merge multiple `sv.DetectionDataset` together using `merge` method. (#158)
```python
>>> import supervision as sv

>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']

>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']

>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
```

- Additional `start` and `end` arguments to `sv.get_video_frames_generator` allowing to generate frames only for a selected part of the video; see the sketch below. (#162)
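A sketch grounded in the note above; `start` and `end` are assumed to be frame indices:

```python
>>> import supervision as sv

>>> for frame in sv.get_video_frames_generator(
...     source_path='...', start=60, end=120
... ):
...     ...
```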
🛠️ Fixed

- Incorrect loading of YOLO dataset class names from `data.yaml`. (#157)