supervision-0.19.0
🧑‍🍳 Cookbooks
Supervision Cookbooks - A curated open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for leveraging Supervision alongside diverse Computer Vision models. (#860)
🚀 Added
- `sv.CSVSink` allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file. (#818)
```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```

Demo video: traffic_csv_2.mp4
- `sv.JSONSink` allowing for the straightforward saving of image, video, or stream inference results in a `.json` file. (#819)
```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```

- `sv.mask_iou_batch` for computing the Intersection over Union (IoU) of two sets of masks. (#847)
- `sv.mask_non_max_suppression` for performing Non-Maximum Suppression (NMS) on segmentation predictions. (#847) A short sketch of both mask utilities follows the `sv.CropAnnotator` example below.
- `sv.CropAnnotator` allowing users to annotate the scene with scaled-up crops of detections. (#888)
```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

Demo video: supervision-0.19.0-promo.mp4
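A rough, hedged sketch of the two new mask utilities listed above. The boolean mask layout of shape `(N, H, W)` and the `[x1, y1, x2, y2, score]` row format for `predictions` are assumptions; consult the utility reference for the authoritative signatures.

```python
import numpy as np
import supervision as sv

# two sets of boolean masks with assumed shape (N, H, W)
masks_a = np.zeros((2, 100, 100), dtype=bool)
masks_a[0, 10:50, 10:50] = True
masks_a[1, 40:90, 40:90] = True

masks_b = np.zeros((1, 100, 100), dtype=bool)
masks_b[0, 20:60, 20:60] = True

# pairwise IoU matrix with shape (2, 1)
iou = sv.mask_iou_batch(masks_a, masks_b)

# NMS over segmentation predictions; each row assumed to be [x1, y1, x2, y2, score]
predictions = np.array([
    [10, 10, 50, 50, 0.90],
    [40, 40, 90, 90, 0.80],
])
keep = sv.mask_non_max_suppression(predictions, masks_a, iou_threshold=0.5)
print(iou, keep)  # keep is a boolean array marking surviving predictions
```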
🌱 Changed
- `sv.ByteTrack.reset` allowing users to clear tracker state, enabling the processing of multiple video files in sequence (a sketch follows the warning below). (#827)
- `sv.LineZoneAnnotator` allowing the in/out count to be hidden via the `display_in_count` and `display_out_count` properties. (#802)
- `sv.ByteTrack` input arguments and docstrings updated to improve readability and ease of use. (#787)
Warning
The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `sv.ByteTrack` are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
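A minimal sketch of the updated tracker usage, combining the renamed constructor arguments with `reset` between videos. The paths and parameter values are placeholders, not recommendations; confirm the full keyword list against the `sv.ByteTrack` docstring.

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)

# renamed arguments; the values shown are illustrative, not prescribed defaults
tracker = sv.ByteTrack(
    lost_track_buffer=30,
    track_activation_threshold=0.25,
    minimum_matching_threshold=0.8,
)

for video_path in [<FIRST_VIDEO_PATH>, <SECOND_VIDEO_PATH>]:
    # clear the tracker state before each file so track IDs do not leak across videos
    tracker.reset()
    for frame in sv.get_video_frames_generator(video_path):
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        detections = tracker.update_with_detections(detections)
```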
- `sv.PolygonZone` to now accept a list of specific box anchors that must be in the zone for a detection to be counted (a sketch follows the warning below). (#910)
Warning
The `triggering_position` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.
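A hedged sketch of anchor-based zone triggering. The polygon coordinates and frame resolution are placeholders, and the assumption that `frame_resolution_wh` is still required in this release should be checked against the `sv.PolygonZone` documentation.

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)

# zone polygon in pixel coordinates; values are illustrative only
polygon = np.array([[100, 100], [500, 100], [500, 400], [100, 400]])

# a detection counts only when every listed anchor lies inside the zone
zone = sv.PolygonZone(
    polygon=polygon,
    frame_resolution_wh=(640, 480),  # assumed still required here; match your video size
    triggering_anchors=(sv.Position.BOTTOM_LEFT, sv.Position.BOTTOM_RIGHT),
)

for frame in sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>):
    result = model(frame)[0]
    detections = sv.Detections.from_ultralytics(result)
    in_zone = zone.trigger(detections)  # boolean mask, one entry per detection
    print(zone.current_count)
```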
- Annotators adding support for Pillow images. All supervision Annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input. (#875)
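A brief sketch of the Pillow path, assuming `sv.BoundingBoxAnnotator` as the annotator and that the `inference` model accepts a Pillow image directly; any supervision annotator should behave the same way, returning a `PIL.Image.Image` when given one.

```python
import supervision as sv
from PIL import Image
from inference import get_model

image = Image.open(<SOURCE_IMAGE_PATH>)  # Pillow image instead of a numpy array
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

box_annotator = sv.BoundingBoxAnnotator()
annotated_image = box_annotator.annotate(scene=image.copy(), detections=detections)

# the output format matches the input format
print(type(annotated_image))  # <class 'PIL.Image.Image'>
```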
🛠️ Fixed
- `sv.DetectionsSmoother` removing `tracking_id` from `sv.Detections`. (#944)
- `sv.DetectionDataset` which, after changes introduced in `supervision-0.18.0`, failed to load datasets in YOLO, PASCAL VOC, and COCO formats.
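For reference, a hedged sketch of the three loaders affected by the `sv.DetectionDataset` regression; the directory placeholders and keyword names are assumptions based on the dataset API, so adjust them to your local layout.

```python
import supervision as sv

ds_yolo = sv.DetectionDataset.from_yolo(
    images_directory_path=<IMAGES_DIRECTORY_PATH>,
    annotations_directory_path=<ANNOTATIONS_DIRECTORY_PATH>,
    data_yaml_path=<DATA_YAML_PATH>,
)

ds_voc = sv.DetectionDataset.from_pascal_voc(
    images_directory_path=<IMAGES_DIRECTORY_PATH>,
    annotations_directory_path=<ANNOTATIONS_DIRECTORY_PATH>,
)

ds_coco = sv.DetectionDataset.from_coco(
    images_directory_path=<IMAGES_DIRECTORY_PATH>,
    annotations_path=<COCO_ANNOTATIONS_JSON_PATH>,
)

# each loader should now return a populated dataset again
print(len(ds_yolo), len(ds_voc), len(ds_coco))
```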
🏆 Contributors
@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @LeviVasconcelos (Levi Vasconcelos), @AdonaiVera (Adonai Vera), @xaristeidou (Christoforos Aristeidou), @Kadermiyanyedi (Kader Miyanyedi), @NickHerrig (Nick Herrig), @PacificDou (Shuyang Dou), @iamhatesz (Tomasz Wrona), @capjamesg (James Gallagher), @sansyo, @SkalskiP (Piotr Skalski)